Search Results

Search found 167 results on 7 pages for 'lexical analyser'.

Page 3/7 | < Previous Page | 1 2 3 4 5 6 7  | Next Page >

  • Why does 12.04 upgrade abort with out of space error when I have lots of it?

    - by Kristian Thomsen
    When upgrading Ubuntu from 11.10 to 12.04 I discovered an unexpected problem. The upgrade was stopped because there wasn't enough free space for the installation. I managed to free some space and do the upgrade, but now a prompt appears after logging in saying I'm out of space. This prompt asks me if I want to examine the problem, and the "Disk Usage Analyser" is opened. At the top it says:

        Total filesystem capacity: 47.0 GB (used: 13.5 GB, available: 33.4 GB)

        Folder -- Usage -- Size
        /      -- 100%  -- 12.5 GB
        usr    -- 44.8% -- 5.6 GB
        home   -- 30.3% -- 3.8 GB
        lib    -- 13.0% -- 1.6 GB
        var    -- 9.1%  -- 1.1 GB
        boot   -- 2.5%  -- 309.5 MB

    plus a lot of small contributors: etc, opt, sbin, bin, etc. I do not really understand this problem, since the analyser at the top says that I have 33.4 GB left in this file system. What can I do to make Ubuntu use the remaining space?

    Running df -i in the terminal gives:

        Filesystem     Inodes  IUsed   IFree IUse% Mounted on
        /dev/sda7      610800 576874   33926   95% /
        udev           213451    563  212888    1% /dev
        tmpfs          218524    486  218038    1% /run
        none           218524      3  218521    1% /run/lock
        none           218524      7  218517    1% /run/shm
        /dev/sda8     2264752  16371 2248381    1% /home

    The output of df -h:

        Filesystem     Size Used Avail Use% Mounted on
        /dev/sda7      9,3G 7,8G  1,1G  88% /
        udev           993M 4,0K  993M   1% /dev
        tmpfs          401M 884K  400M   1% /run
        none           5,0M    0  5,0M   0% /run/lock
        none          1003M 152K 1002M   1% /run/shm
        /dev/sda8       35G 4,0G   29G  13% /home
        /dev/sda2      101G  64G   37G  64% /media/A2C8E28BC8E25CD3

    Running sudo fdisk -l gives:

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000080

        Device Boot      Start        End     Blocks  Id System
        /dev/sda1           63      96389     48163+  de Dell Utility
        /dev/sda2  *     98304  210434488 105168092+   7 HPFS/NTFS/exFAT
        /dev/sda3    210436094  312576704  51070305+   f W95 Ext'd (LBA)
        /dev/sda5    306279288  312576704   3148708+  dd Unknown
        /dev/sda6    210436096  214341631    1952768  82 Linux swap / Solaris
        /dev/sda7    214343680  233873407    9764864  83 Linux
        /dev/sda8    233875456  306278399   36201472  83 Linux

        Partition table entries are not in disk order
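    A side note on why the analyser's headline figure can mislead: the "total filesystem capacity" adds up every mounted filesystem, while the upgrade only cares about free space on the partition it actually writes to (here the 9.3 GB root). A quick way to see the per-mount numbers, as a generic Python sketch rather than anything specific to this machine:

        import shutil

        # Free space is per mount point, not per machine: / can be nearly full
        # even while /home still has tens of gigabytes available.
        for mount in ("/", "/home"):
            usage = shutil.disk_usage(mount)
            print(f"{mount}: {usage.free / 1e9:.1f} GB free of {usage.total / 1e9:.1f} GB")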

    Read the article

  • Is it justified to use project-wide unique function and variable names to help future refactoring?

    - by kahoon
    Refactoring tools (like ReSharper) often can't be sure whether to rename a given identifier when, for example, refactoring a JavaScript function. I guess this is a consequence of JavaScript's dynamic nature. ReSharper solves this problem by offering to rename reasonable lexical matches too. The developer can opt out of renaming certain functions if he finds that the lexical match is purely accidental. This means that the developer has to approve every instance that will be affected by the renaming. For example, let's consider two Backbone classes which are used completely independently from each other in our application:

        var Header = Backbone.View.extend({
            close: function() {...}
        })
        var Dialog = Backbone.View.extend({
            close: function() {...}
        })

    If you try to rename Dialog's close method to, for example, closeAndNotify, then ReSharper will offer to rename all occurrences of Header's close method too, just because they are lexically identical prior to the renaming. To avoid this problem, consider this instead:

        var Header = Backbone.View.extend({
            closeHeader: function() {...}
        })
        var Dialog = Backbone.View.extend({
            closeDialog: function() {...}
        })

    Now you can rename closeDialog unambiguously, given that you have no other classes with a method of the same name. Is it worth it to name my functions this way to help future refactoring?

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET profiler and some closing thoughts.

Test Results – I see some creepy worms!

In Part 2 we put together a web performance test and a load test; let's run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; this provides key results in a format that is compact and easy to read. You can also print the load test summary; this is generated after the test has completed or been stopped.

Analyse the load test results of a previously run load test - We'll see this in the section where I discuss comparison between two test runs.

The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view its details, and to drill down to the user activity chart, where you can hover over items to see more details of the test run.

Generate Report => Test Run Comparisons

The level of reporting you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

Compare results

This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
Closing Thoughts

While load testing the application with an excessive load for a longer duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but that might escalate the problem; the better suggestion is to try and drill down to the actual root cause.

Whenever garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up, but realistically the garbage collection completes in a fraction of a second. To understand this better, let's look at the .NET heap. It is divided into the large object heap and the small object heap; anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember, large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations: all objects that are supposed to be short-lived live in Gen-0, and the long-living objects eventually move to Gen-2 as garbage collection goes through.

As you can see in the picture below, all objects smaller than 85 KB are first assigned to Gen-0. When Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started: the process checks for all the dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1's dead objects are collected and the rest are moved to Gen-2, while Gen-0 objects are moved to Gen-1 to free up Gen-0; but by this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, when garbage collection is being run, all page requests that come in during that period are queued up. Does this explain why page requests are possibly getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete.

Let's explore the heap a bit more… What is really a case of crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory, for example when you cache data: you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you wanted to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all your memory in the heap, but it also clears the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user's basket will now be empty, forcing them either to get frustrated and go to a competitor's website or, if the customer is really patient, to give it another try!
How can you address this? Well, there are two ways of addressing it:

1. Workaround – An x86 processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available; the OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application in 'Compile as any' mode, the application is built such that it will run in x86 mode on an x86 processor and in x64 mode on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode, by changing the 'compile as any' selection to 'compile as x86' in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW, by running Windows on Windows), and what that means is you could use 8 GB+ worth of RAM: if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong in your application.

2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs a question: "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, 'Performance Profiling' is a great starting point; it definitely gets you started in the right direction. Let's have a look at how.

Performance Wizard – Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the performance session properties page and, as shown in the screen shot below, check the check boxes in the group '.NET memory profiling collection', namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'.

Now if you fire off the profiling session on your pages, you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2 storage, a threshold alert is generated to warn you. You also have the option to view the most expensive methods, and by capturing the IntelliTrace data you can drill in to narrow down to the line of code that is the root cause of the problem. So now we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product.

Caching

One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request, it should keep the data on the web server and, when the user asks for it, serve it from the web server itself. BUT that can have consequences!
Let's look at some code; trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the snippet below. So, as long as a user comes in and finds that the cache is empty, that user takes the lock and starts to build the cache: no more concurrency issues. But let's say you are processing 10 requests per second: by the time I have locked the operation to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster, because the next set of 10 users, and so on, would continue to get data from the cache. BUT if we added another null check after taking the lock and before the actual call to the db, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn't I say that the code won't be very intuitive? Maybe you should leave a comment, because you don't want another developer to come in and think "what a fresher, why is he checking the cache key for null twice?!"

The downside of caching is that you are storing the data outside of the database, and the data could be wrong because updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, say only one entry point for data updates, you could write some logic to say that every time new data is entered, set the cache object to null. But this approach will not work as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The perfect solution to this is micro-caching, which means you cache the query results for a set time duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, there are no calls made to the database and the data is served right from the server, which makes the response immensely quick. Now, figuring out the appropriate time span for which you micro-cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute, you will have immense performance gains; you would reduce hits to the database for searching by 90%. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly: that is a cache failure! These travel sites and price-comparison engines are not going to hit the database every time you hit the compare button; instead the results will be served from the cache, because the query results are micro-cached. It's a perfect trade-off: by micro-caching the results the site gains huge performance benefits, but every once in a while it annoys a customer because the fare has expired.
But the trade-off works in these sites' favour: they are still able to process 30+ page requests per second, which means they can cater for the site traffic, even if it means losing a customer every once in a while to a competitor who is also using a similar caching technique. What are the odds that the user will not come back to their site sooner or later?

Recap

Resources

Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

The Road Ahead

Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc.: please leave a comment. Next up is 'Load Testing in the Cloud', where I'll be exploring the possibilities of running test controllers/agents in the cloud. See you on the other side! Thank you!
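The snippet that the caching discussion above refers to ("adding a lock, as you can see in the snippet below") does not survive in this excerpt, so here is a minimal sketch of the double-checked cache population pattern being described. It is in Python rather than the post's ASP.NET/C#, and the cache dictionary, lock and fetch_from_db callable are illustrative stand-ins, not the author's code:

    import threading

    _cache = {}
    _cache_lock = threading.Lock()

    def get_results(cache_key, fetch_from_db):
        """Double-checked cache population: only the first caller per key pays
        for the database trip; everyone queued behind the lock re-checks the
        key and reuses the freshly cached value instead of hitting the DB."""
        result = _cache.get(cache_key)
        if result is not None:
            return result                    # fast path: already cached
        with _cache_lock:
            result = _cache.get(cache_key)   # the "non-intuitive" second null check
            if result is None:
                result = fetch_from_db()     # only one caller per key ever gets here
                _cache[cache_key] = result
            return result

Micro-caching, as described above, is the same idea with a timestamp stored next to the value: the cached entry is treated as missing once its age exceeds the chosen window.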

    Read the article

  • CPP extension and multiline literals in Haskell

    - by jetxee
    Is it possible to use the CPP extension on Haskell code which contains multiline string literals? Are there other conditional compilation techniques for Haskell? For example, let's take this code:

        -- If the next line is uncommented, the program does not compile.
        -- {-# LANGUAGE CPP #-}
        msg = "Hello\
        \ Wor\
        \ld!"
        main = putStrLn msg

    If I uncomment {-# LANGUAGE CPP #-}, then GHC rejects this code with a lexical error:

        [1 of 1] Compiling Main             ( cpp-multiline.hs, cpp-multiline.o )
        cpp-multiline.hs:4:17:
            lexical error in string/character literal at character 'o'

    Using GHC 6.12.1, cpphs is available. I confirm that using the cpphs.compat wrapper and the -pgmP cpphs.compat option helps, but I'd like a solution which does not depend on custom shell scripts. -pgmP cpphs does not work.

    P.S. I need to use different code for GHC < 6.12 and GHC >= 6.12; is it possible without the preprocessor?

    Read the article

  • Laws of Computer Science and Programming

    - by Jonas
    We have Amdahl's law, which basically states that if your program is 10% sequential you can get a maximum 10x performance boost by parallelising your application. Another one is Wadler's law, which states that:

        In any language design, the total time spent discussing a feature in this list
        is proportional to two raised to the power of its position.
        0. Semantics
        1. Syntax
        2. Lexical syntax
        3. Lexical syntax of comments

    My question is this: What are the most important (or at least significant / funny but true / sad but true) laws of Computer Science and programming? I want named laws, not random theorems, so an answer should look something like "Surname's (law|theorem|conjecture|corollary...)". Please state the law in your answer, and not only a link.

    Edit: The name of the law does not need to contain its inventor's surname. But I do want to know who stated (and perhaps proved) the law.
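    For reference, the arithmetic behind the Amdahl's law example above: with a sequential fraction s of the work and the remainder spread over n processors, the speedup is S(n) = 1 / (s + (1 - s)/n), which tends to 1/s as n grows without bound. With s = 0.1 that ceiling is 10, which is where the "maximum 10x" figure comes from.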

    Read the article

  • Create a Counter within a For-Loop?

    - by Todd Hartman
    I am a novice programmer and apologize upfront for the complicated question. I am trying to create a lexical decision task for experimental research, in which respondents must decide if a series of letters presented on the screen makes a "word" or "not a word". Everything works reasonably well except for the bit where I want to randomly select a word (category A) or nonword (category B) for each of 80 trials from a separate input file (input.txt). The randomization works, but some elements from each list (category A or B) are skipped because I have used "round.catIndex = j;", where "j" is the loop counter for each successive trial. Because some trials randomly select from category A and others from category B, "j" does not move successively down the list for each category. Instead, elements from the category A list may be selected as something like 1, 2, 5, 8, 9, 10, and so on (it varies each time because of the randomization). To make a long story short(!), how do I create a counter that will work within the for-loop for each trial, so that every word and nonword from categories A and B, respectively, will be used for the lexical decision task? Everything I have tried thus far does not work properly or breaks the JavaScript entirely. Below is my code snippet; the full code is available at http://50.17.194.59/LDT/trunk/LDT.js. Also, the full lexical decision task can be accessed at http://50.17.194.59/LDT/trunk/LDT.php. Thanks!

        function initRounds() {
            numlst = [];
            for (var k = 0; k < numrounds; k++) {
                if (k % 2 == 0) numlst[k] = 0;
                else numlst[k] = 1;
            }
            numlst.sort(function() { return 0.5 - Math.random() });
            for (var j = 0; j < numrounds; j++) {
                var round = new LDTround();
                if (numlst[j] == 0) {
                    round.category = input.catA.datalabel;
                } else if (numlst[j] == 1) {
                    round.category = input.catB.datalabel;
                }
                // pick a category & stimulus
                if (round.category == input.catA.datalabel) {
                    round.itemtype = input.catA.itemtype;
                    round.correct = 1;
                    round.catIndex = j;
                } else if (round.category == input.catB.datalabel) {
                    round.itemtype = input.catB.itemtype;
                    round.correct = 2;
                    round.catIndex = j;
                }
                roundArray[i].push(round);
            }
            return roundArray;
        }
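    The fix being asked for (one independent counter per category, advanced only when that category is drawn, instead of reusing the trial index j) is easier to see stripped of the experiment plumbing. A rough sketch of the idea in Python rather than the poster's JavaScript, with purely illustrative names:

        import random

        def init_rounds(num_rounds, cat_a_items, cat_b_items):
            picks = [0, 1] * (num_rounds // 2)   # half category A (0), half category B (1)
            random.shuffle(picks)
            counters = {0: 0, 1: 0}              # independent position counter per category
            rounds = []
            for pick in picks:
                items = cat_a_items if pick == 0 else cat_b_items
                index = counters[pick]           # next unused item *within this category*
                rounds.append({"category": "A" if pick == 0 else "B", "item": items[index]})
                counters[pick] += 1              # advance only the category that was drawn
            return rounds

        print(init_rounds(6, ["word1", "word2", "word3"], ["nonw1", "nonw2", "nonw3"]))

    Every item from both lists gets used exactly once, regardless of the order in which the categories come up.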

    Read the article

  • Problematic behavior of Linq Union?!

    - by Foxfire
    Hi, consider the following example:

        public IEnumerable<String> Test () {
            IEnumerable<String> lexicalStrings = new List<String> { "test", "t" };
            IEnumerable<String> allLexicals = new List<String> { "test", "Test", "T", "t" };
            IEnumerable<String> lexicals = new List<String> ();
            foreach (String s in lexicalStrings)
                lexicals = lexicals.Union (allLexicals.Where (lexical => lexical == s));
            return lexicals;
        }

    I'd hoped for it to produce "test", "t" as output, but it does not (the output is only "t"). I'm not sure, but it may have something to do with the deferred execution. Any ideas how to get this to work, or a good alternative?
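    The trap here is the combination of deferred execution and capturing the loop variable: the Where filters only run when the union is finally enumerated, after the foreach has finished, so every lambda sees the last value of s. The same late-binding behaviour is easy to reproduce in other languages; a small Python illustration of the pitfall (not a fix for the C# code above):

        # Each lambda closes over the *variable* i, not the value it had when the
        # lambda was created, so all of them report i's final value.
        delayed = [lambda: i for i in range(3)]
        print([f() for f in delayed])        # [2, 2, 2], not [0, 1, 2]

        # Binding the current value explicitly (here via a default argument) restores the intent.
        delayed = [lambda i=i: i for i in range(3)]
        print([f() for f in delayed])        # [0, 1, 2]

    For the C# above, the usual workaround is the same idea: copy s into a fresh local variable inside the loop body before using it in the lambda, or force evaluation inside the loop (for example with ToList) so each filter runs while s still holds the intended value.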

    Read the article

  • Lucene.Net - How to treat a space-separated phrase as a single token?

    - by Gareth D
    I've implemented a search facility using Lucene.Net. The index includes UK academic qualifications, including "A Level". I'd like the users to be able to search using the phrase "A Level", but using the Standard Analyser the "A" is stripped out as a stop word, and therefore only "Level" is indexed/searched. What's my best option to work around this? I'm guessing I need to somehow tokenise "A Level" to "A-Level" or similar by creating a custom analyser. Is this the best approach? Note that I don't want the whole search to be a phrase query, i.e. in my search box I want the user to be able to enter "A Level" AND English Maths Physics, and this would return anything with "A Level" and any of English, Maths or Physics. Question updated to reflect this.
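    Whatever analyser ends up doing the work, the effect needed is that "A Level" survives as one indivisible token on both the indexing side and the query side. A toy sketch of that folding step in Python, purely to illustrate the idea rather than Lucene.Net's analyzer or TokenFilter machinery:

        PHRASES = {"a level": "a-level"}   # multi-word terms to fold into single tokens

        def fold_phrases(text):
            # Rewrite known multi-word terms before tokenisation so a stop-word
            # filter can no longer split them; this has to be applied both to the
            # documents being indexed and to the incoming query text.
            lowered = text.lower()
            for phrase, token in PHRASES.items():
                lowered = lowered.replace(phrase, token)
            return lowered

        print(fold_phrases('"A Level" AND English Maths Physics'))
        # "a-level" and english maths physics

    The key point is symmetry: if the substitution only happens at index time, a query for "A Level" will still be stripped down to "level" and miss.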

    Read the article

  • Information Builders launches WebFOCUS Mobile BI, its BI solution for smartphones and tablets

    Information Builders launches WebFOCUS Mobile BI: its BI solution for smartphones and tablets. Information Builders has announced the launch of WebFOCUS Mobile BI, its new Business Intelligence solution that makes analytics available on mobile from any device, mobile application or browser. The solution uses WebFOCUS Active Technology reports, optimising them to be compatible with every type of smartphone, such as the iPhone, Android phones, BlackBerry devices and tablets. Active Technology reports let users manipulate and analyse business information on their mobile...

    Read the article

  • Helix: conducting a forensic analysis after an intrusion, by Pierre Therrode

    In the Internet era, we are all aware of being potential targets for hackers. Crime on the Net is more and more present. Against this new technological backdrop, intrusions into a system affect ordinary users and businesses alike. However, it is possible for such actions to be recorded so that they can be analysed as evidence.

    Read the article

  • BizTalk Best Practice Analyzer v1.2 – BTS 06, 06R2 + 09

    - by BizTalk Visionary
    The BizTalk Best Practice Analyser is released and available for download. Download: BizTalkBPA V1.2. As always, another very handy tool is the Message Box Viewer (currently V10), which provides some very detailed information as well. Download: Message Box Viewer (MBV). Enjoy your day, Mick. Read the complete post at http://blogs.breezetraining.com.au/mickb/2010/03/31/BizTalkBestPracticeAnalyzerV12BTS0606R209.aspx

    Read the article

  • Are hard drives growing in capacity fast enough? How much space does a user need at most?

    Are hard drives growing in capacity fast enough? How much space does a user need at most? With Seagate having just announced the imminent release of a 3 TB hard drive, one may wonder about the growth in capacity of storage products. Is hard drive space expanding enough? And as much as predicted? Various laws have tried to analyse, systematise and predict the growth rate of the future evolution of hard drives (such as the...

    Read the article

  • Windows Domain Chaos - Any Solving Approach

    - by Chake
    We are running an old Windows 2003 Server as domain controller (DC2003). To safely migrate to Windows 2008 R2 we added a 2008 R2 machine (DC2008R2) to the domain as a domain controller (adprep etc.). After dcpromo on DC2008R2 everything seemed to be OK. The new DC appeared under the "Domain Controllers" node. It wasn't checked at the time whether DC2008R2 could REALLY act as a domain controller. Later we tried to shut down DC2003 and ran into a total mess with non-functional Exchange and Team Foundation Services. After that I got the job to fix it... First I thought it could be a problem with DC2008R2, so I removed it as a domain controller and installed a new Windows 2008 R2 server, DC2008R2-2. I ran into similar problems. I tried a bunch of stuff, but nothing helped. I won't list it all; maybe I made a mistake, so I'm willing to redo it with your suggestions. To have a starting point I tried the Best Practices Analyzer, which ended up with 24 "compatible" and 26 "not compatible" tests. Of these 26 tests, 19 read the same (I'm translating from German, so that may not be the exact wording): Problem: Using the Best Practices Analyzer for Active Directory Domain Services (Active Directory Domain Services Best Practices Analyzer, AD DS BPA), no data can be gathered using the name of the forest and the domain controller DC2008R2-2. I appreciate any suggestions; this really bothers me.

    Read the article

  • wifi hardware switch doesn't work on a Dell 1018

    - by user42566
    I have a problem with my Dell 1018 Inspiron. I can't switch the wifi on through the key on the keyboard. I think it's a driver problem since Ubuntu 11.10. These are the versions I tried:

    Ubuntu 10.04 / 10.10 - It's possible to install the driver by hand:

        sudo add-apt-repository ppa:lexical/hwe-wireless
        sudo apt-get update
        sudo apt-get install rtl8192ce-dkms

    Ubuntu 11.04 - It works out of the box.

    Ubuntu 11.10 / 12.04 - I haven't found any solution for these versions. The "ppa:lexical/hwe-wireless" doesn't work for these versions: it says "Can not find package rtl8192ce-dkms". The window of additional drivers is empty, so I can't install the driver. The wired network works fine. Here is some information:

        0: dell-wifi: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        1: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: yes

        sudo lshw -class network
        *-network
            description: Ethernet interface
            product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
            vendor: Realtek Semiconductor Co., Ltd.
            physical id: 0
            bus info: pci@0000:05:00.0
            logical name: eth0
            version: 05
            serial: 5c:26:0a:0d:20:10
            size: 10Mbit/s
            capacity: 100Mbit/s
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
            configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=rtl_nic/rtl8105e-1.fw latency=0 link=no multicast=yes port=MII speed=10Mbit/s
            resources: irq:43 ioport:2000(size=256) memory:f0f2c000-f0f2cfff memory:f0f18000-f0f1bfff
        *-network
            description: Wireless interface
            product: RTL8188CE 802.11b/g/n WiFi Adapter
            vendor: Realtek Semiconductor Co., Ltd.
            physical id: 0
            bus info: pci@0000:07:00.0
            logical name: wlan0
            version: 01
            serial: 70:f1:a1:fe:15:bd
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
            configuration: broadcast=yes driver=rtl8192ce driverversion=3.2.0-22-generic-pae firmware=N/A ip=192.168.1.76 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
            resources: irq:17 ioport:3000(size=256) memory:f0100000-f0103fff

        mark@mark-Inspiron-1018:~$ sudo lspci -nn
        00:00.0 Host bridge [0600]: Intel Corporation N10 Family DMI Bridge [8086:a010]
        00:02.0 VGA compatible controller [0300]: Intel Corporation N10 Family Integrated Graphics Controller [8086:a011]
        00:02.1 Display controller [0380]: Intel Corporation N10 Family Integrated Graphics Controller [8086:a012]
        00:1b.0 Audio device [0403]: Intel Corporation N10/ICH 7 Family High Definition Audio Controller [8086:27d8] (rev 02)
        00:1c.0 PCI bridge [0604]: Intel Corporation N10/ICH 7 Family PCI Express Port 1 [8086:27d0] (rev 02)
        00:1c.1 PCI bridge [0604]: Intel Corporation N10/ICH 7 Family PCI Express Port 2 [8086:27d2] (rev 02)
        00:1d.0 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 [8086:27c8] (rev 02)
        00:1d.1 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 [8086:27c9] (rev 02)
        00:1d.2 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 [8086:27ca] (rev 02)
        00:1d.3 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 [8086:27cb] (rev 02)
        00:1d.7 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller [8086:27cc] (rev 02)
        00:1e.0 PCI bridge [0604]: Intel Corporation 82801 Mobile PCI Bridge [8086:2448] (rev e2)
        00:1f.0 ISA bridge [0601]: Intel Corporation NM10 Family LPC Controller [8086:27bc] (rev 02)
        00:1f.2 SATA controller [0106]: Intel Corporation N10/ICH7 Family SATA Controller [AHCI mode] [8086:27c1] (rev 02)
        00:1f.3 SMBus [0c05]: Intel Corporation N10/ICH 7 Family SMBus Controller [8086:27da] (rev 02)
        05:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 05)
        07:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter [10ec:8176] (rev 01)

        mark@mark-Inspiron-1018:~$ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 002: ID 174f:1127 Syntek
        mark@mark-Inspiron-1018:~$

    Read the article

  • Efficient Context-Free Grammar parser, preferably Python-friendly

    - by Max Shawabkeh
    I am in need of parsing a small subset of English for one of my projects, described as a context-free grammar with (1-level) feature structures (example), and I need to do it efficiently. Right now I'm using NLTK's parser, which produces the right output but is very slow. For my grammar of ~450 fairly ambiguous non-lexicon rules and half a million lexical entries, parsing simple sentences can take anywhere from 2 to 30 seconds, depending, it seems, on the number of resulting trees. Lexical entries have little to no effect on performance. Another problem is that loading the (25MB) grammar+lexicon at the beginning can take up to a minute. From what I can find in the literature, the running time of the algorithms used to parse such a grammar (Earley or CKY) should be linear in the size of the grammar and cubic in the length of the input token list. My experience with NLTK indicates that ambiguity is what hurts performance most, not the absolute size of the grammar. So now I'm looking for a CFG parser to replace NLTK. I've been considering PLY, but I can't tell whether it supports feature structures in CFGs, which are required in my case, and the examples I've seen seem to be doing a lot of procedural parsing rather than just specifying a grammar. Can anybody show me an example of PLY both supporting feature structs and using a declarative grammar? I'm also fine with any other parser that can do what I need efficiently. A Python interface is preferable but not absolutely necessary.
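    For a sense of where the "cubic in the input" bound comes from: CKY fills a table of spans with three nested loops over span length, start position and split point. A minimal CKY recogniser in Python (toy grammar in Chomsky normal form, no feature structures, so this is only an illustration of the algorithm's shape, not a replacement for NLTK or PLY):

        from collections import defaultdict
        from itertools import product

        def cky_recognise(tokens, lexical_rules, binary_rules, start="S"):
            # table[(i, j)] holds the non-terminals that can span tokens[i:j].
            # Runtime is O(|grammar| * n^3) in the number of tokens n; ambiguity
            # does not slow recognition down, it only multiplies the number of
            # trees if you go on to enumerate them.
            n = len(tokens)
            table = defaultdict(set)
            for i, word in enumerate(tokens):
                table[(i, i + 1)] |= lexical_rules.get(word, set())
            for span in range(2, n + 1):
                for i in range(n - span + 1):
                    j = i + span
                    for k in range(i + 1, j):
                        for b, c in product(table[(i, k)], table[(k, j)]):
                            table[(i, j)] |= binary_rules.get((b, c), set())
            return start in table[(0, n)]

        # Toy grammar: S -> NP VP, VP -> V NP, plus a three-word lexicon.
        lexicon = {"she": {"NP"}, "eats": {"V"}, "fish": {"NP"}}
        rules = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}}
        print(cky_recognise(["she", "eats", "fish"], lexicon, rules))   # True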

    Read the article

  • How does 'lazy' work?

    - by Matt Fenwick
    What is the difference between these two functions? I see that lazy is intended to be lazy, but I don't understand how that is accomplished.

        -- | Identity function.
        id :: a -> a
        id x = x

        -- | The call '(lazy e)' means the same as 'e', but 'lazy' has a
        -- magical strictness property: it is lazy in its first argument,
        -- even though its semantics is strict.
        lazy :: a -> a
        lazy x = x
        -- Implementation note: its strictness and unfolding are over-ridden
        -- by the definition in MkId.lhs; in both cases to nothing at all.
        -- That way, 'lazy' does not get inlined, and the strictness analyser
        -- sees it as lazy. Then the worker/wrapper phase inlines it.
        -- Result: happiness

    Tracking down the note in MkId.lhs (hopefully this is the right note and version, sorry if it's not):

        Note [lazyId magic]
        ~~~~~~~~~~~~~~~~~~~
        lazy :: forall a?. a? -> a?   (i.e. works for unboxed types too)

        Used to lazify pseq:  pseq a b = a `seq` lazy b

        Also, no strictness: by being a built-in Id, all the info about lazyId
        comes from here, not from GHC.Base.hi. This is important, because the
        strictness analyser will spot it as strict!

        Also no unfolding in lazyId: it gets "inlined" by a HACK in CorePrep.
        It's very important to do this inlining after unfoldings are exposed in
        the interface file. Otherwise, the unfolding for (say) pseq in the
        interface file will not mention 'lazy', so if we inline 'pseq' we'll
        totally miss the very thing that 'lazy' was there for in the first place.
        See Trac #3259 for a real world example.

        lazyId is defined in GHC.Base, so we don't have to inline it. If it
        appears un-applied, we'll end up just calling it.

    I don't understand that because it refers to lazyId instead of lazy. How does lazy work?

    Read the article

  • emacs lisp mapcar doesn't apply function to all elements?

    - by Stephen
    Hi, I have a function that takes a list and replaces some elements. I have constructed it as a closure so that the free variable cannot be modified outside of the function.

        (defun transform (elems)
          (lexical-let ( (elems elems) )
            (lambda (seq)
              (let (e)
                (while (setq e (car elems))
                  (setf (nth e seq) e)
                  (setq elems (cdr elems)))
                seq))))

    I call this on a list of lists.

        (defun tester (seq-list)
          (let ( (elems '(1 3 5)) )
            (mapcar (transform elems) seq-list)))

        => ((10 1 8 3 6 5 4 3 2 1) ("a" "b" "c" "d" "e" "f"))

    It does not seem to apply the function to the second element of the list provided to tester(). However, if I explicitly apply this function to the individual elements, it works...

        (defun tester (seq-list)
          (let ( (elems '(1 3 5)) )
            (list (funcall (transform elems) (car seq-list))
                  (funcall (transform elems) (cadr seq-list)))))

        => ((10 1 8 3 6 5 4 3 2 1) ("a" 1 "c" 3 "e" 5))

    If I write a simple function using the same concepts as above, mapcar seems to work... What could I be doing wrong?

        (defun transform (x)
          (lexical-let ( (x x) )
            (lambda (y) (+ x y))))

        (defun tester (seq)
          (let ( (x 1) )
            (mapcar (transform x) seq)))

        (tester (list 1 3))
        => (2 4)

    Thanks
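    What is biting here is that the closure returned by transform consumes its captured elems as it works, so the single closure that mapcar applies to every element has nothing left after the first one. The same trap is easy to reproduce outside Emacs Lisp; a small Python sketch of it, purely for illustration:

        def transform(elems):
            remaining = list(elems)      # one shared, mutable piece of state in the closure
            def apply(seq):
                seq = list(seq)
                while remaining:         # consumed entirely by the first call...
                    e = remaining.pop(0)
                    seq[e] = e
                return seq
            return apply                 # ...so later calls to the same closure do nothing

        f = transform([1, 3, 5])
        print([f(s) for s in [[10, 1, 8, 3, 6, 5, 4, 3, 2, 1], list("abcdef")]])
        # Only the first list is processed; moving 'remaining = list(elems)' inside
        # apply() gives every call its own fresh copy, which is the usual fix.

    In the Emacs Lisp version, the second tester works because it calls (transform elems) twice, building a fresh closure with its own binding of elems for each element.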

    Read the article

  • Programmatically finding the Landau notation (Big O or Theta notation) of an algorithm?

    - by Julien L
    I'm used to searching for the Landau (Big O, Theta...) notation of my algorithms by hand to make sure they are as optimized as they can be, but when the functions are getting really big and complex, it takes way too much time to do it by hand. It's also prone to human error. I spent some time on Codility (coding/algo exercises) and noticed they will give you the Landau notation for your submitted solution (both in time and memory usage). I was wondering how they do that... How would you do it? Is there another way besides lexical analysis or parsing of the code? PS: This question concerns mainly PHP and/or JavaScript, but I'm open to any language and theory.
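    As far as I know Codility has not published its method, but the usual empirical approach is the same thing you would do by hand: run the solution on inputs of growing size, time it, and fit the growth rate, then pick the nearest familiar class. A rough Python sketch of that measurement heuristic, which is emphatically not a static analysis of the source:

        import math
        import time

        def estimate_growth(fn, sizes, make_input):
            # Fits the exponent k in T(n) ~ n^k by least squares on a log-log scale.
            # Constant factors, caching effects and timer noise can easily mislead it.
            points = []
            for n in sizes:
                data = make_input(n)
                start = time.perf_counter()
                fn(data)
                points.append((math.log(n), math.log(time.perf_counter() - start)))
            xs, ys = zip(*points)
            mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
            return (sum((x - mean_x) * (y - mean_y) for x, y in points)
                    / sum((x - mean_x) ** 2 for x in xs))

        # Sorting is O(n log n), so the fitted exponent comes out a little above 1.
        print(estimate_growth(sorted, [2000, 20000, 200000, 2000000],
                              lambda n: list(range(n, 0, -1))))

    Memory can be estimated the same way by sampling peak usage (for example with tracemalloc) instead of wall-clock time.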

    Read the article

  • Oracle: from CRM to CX, Fusion CRM embraces the 'customer experience'

    Oracle: from CRM to CX. Fusion CRM embraces the 'customer experience'. Times change, and so does the way behaviour is analysed. For Oracle, it was therefore time (according to the vendor itself) to overhaul Fusion CRM in depth. Passing through Paris to present the evolution of its customer relationship analysis offering, David Tice, Oracle vice-president in charge of CRM products, delivered a clear message: for him, the transformation from CRM to CX has begun. What is CX? The consumer (C) experience (X). Amazon, with its lists and its re...

    Read the article

  • Java tutorial: Understanding how RMI (Remote Method Invocation) works, by Alain Defrance

    Hello, RMI (Remote Method Invocation) is used very often, either directly or in underlying layers. RMI is used, for example, to expose EJB SessionBeans. Our goal will be to demystify RMI by understanding its mechanisms. We will analyse how a remote invocation is possible, going as far as implementing our own lightweight version of RMI. Of course, in order to focus on the goals of this article, some prerequisites are necessary; no refresher on network programming in Java will be given. Article available here: http://alain-defrance.developpez.com...2SE/micro-rmi/...

    Read the article

  • GenPlay: a fast, open-source genome analyser developed by scientists and freely available

    GenPlay: a fast, open-source genome analyser, developed by scientists and freely available. Scientists at the Albert Einstein College of Medicine of Yeshiva University in New York have developed a desktop application for genome analysis (including the human genome). Named GenPlay, this open-source solution works with a browser and lets biologists quickly and easily analyse and process their high-throughput data. Currently, the information of the entire genetic material of an individual or a species is encoded in its DNA. It is analysed mainly by information specialists rather than...

    Read the article

  • Display folder sizes in file manager

    - by wim
    In the Nautilus (or Nemo) file manager, the "Size" column shows the file size for files and the number of items contained in a folder for subdirectories. The number of items is not that important for me; it would be more useful if I could make this column show the total size contained under the directory. I had an extension on Windows called Folder Size which shows what I mean; I think it involved a service which ran in the background monitoring filesystem modifications in order to make sure the column was kept up to date. I am interested to know if there is any similar extension for Nautilus; I would also be open to switching to another file manager to get this functionality. I am aware of the Disk Usage Analyser in Ubuntu, but what I'm looking for is a solution with file manager integration.
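    For context on why file managers tend to show an item count instead: a true folder-size column has to walk the entire subtree for every directory shown, which is exactly the work that background service in the Windows Folder Size extension was doing. A rough Python sketch of the computation involved:

        import os

        def folder_size(path):
            # Sum the sizes of every file under 'path'. This is what a live
            # "folder size" column has to keep recomputing, which is why most
            # file managers either skip it or cache it with a background service.
            total = 0
            for root, _dirs, files in os.walk(path, onerror=lambda e: None):
                for name in files:
                    try:
                        total += os.lstat(os.path.join(root, name)).st_size
                    except OSError:
                        pass  # file vanished or permission denied
            return total

        print(folder_size(os.path.expanduser("~")) / 1e9, "GB")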

    Read the article

  • What's the thing that reports bugs in PHP?

    - by Max Hazard
    Currently I am learning PHP. PHP is understood by the browser itself, right from the PHP SDK, right? The SDK includes libraries, right? So the browser is like an interpreter of PHP code. I want to know: whenever I type wrong PHP syntax, what is the thing that reports the error? Obviously the browser is reporting the error, but what part of it? I mean, I don't get it. When writing a compiler we do lexical analysis and make the compiler report any bug in the source code, and I assume here the browser is analogous to a compiler. I don't know exactly, but a compiler contains bug-reporting functions or methods, which is a debugger; a debugger is the part of a compiler which reports bugs. Does the browser contain such debuggers? Can there be any browser which doesn't understand PHP?

    Read the article

  • Can someone provide a short code example of compiler bootstrapping?

    - by Jatin
    This Turing award lecture by Ken Thompson on the topic "Reflections on Trusting Trust" gives good insight into how the C compiler was made in C itself. Though I understand the crux, it still hasn't sunk in. So ultimately, once the compiler is written to do lexical analysis, build parse trees, do syntax analysis, generate byte code etc., is a separate machine-code version written again to do all of that for the compiler itself? Can anyone please explain with a small example of the procedure? Bootstrapping on wiki gives good insights, but only a rough view of it. PS: I am aware of the duplicates on the site, but found them to be an overview of things I am already aware of.
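    The part that usually refuses to sink in is that the self-hosted compiler is built in stages, each stage being an ordinary program run on the previous stage's output. A schematic sketch of that build process in Python; the mycc binary names, the compiler.myc source file and the command-line syntax are all hypothetical, purely to show the shape of a classic three-stage bootstrap:

        import subprocess

        def build(compiler, source, output):
            # Hypothetical compiler CLI: <compiler> <source> -o <output>
            subprocess.run([compiler, source, "-o", output], check=True)

        # Stage 0: some pre-existing compiler (written in another language, or an
        # older binary of this same compiler) builds the new compiler source once.
        build("./mycc-stage0", "compiler.myc", "mycc-stage1")

        # The freshly built compiler now compiles its *own* source, twice over.
        build("./mycc-stage1", "compiler.myc", "mycc-stage2")
        build("./mycc-stage2", "compiler.myc", "mycc-stage3")

        # If stage 2 and stage 3 are byte-identical, the compiler reproduces itself
        # exactly and the bootstrap is sound; stage 0 is no longer needed after this.
        with open("mycc-stage2", "rb") as a, open("mycc-stage3", "rb") as b:
            print("bootstrap converged:", a.read() == b.read())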

    Read the article

  • Google Closure Compiler - what does the name mean?

    - by mikez302
    I am curious about the Google Closure Compiler. Why did they name it that? Does it have anything to do with lexical closures? EDIT: I tried researching it in the FAQ and documentation, as well as doing Google searches such as "closure compiler name". I couldn't find anything definite, hence the reason I am asking. I don't think I will get a profoundly helpful answer but I was hoping that I could at least satisfy my curiosity. I am not trying to solve a specific problem. I am just curious.

    Read the article

< Previous Page | 1 2 3 4 5 6 7  | Next Page >