Search Results

Search found 29638 results on 1186 pages for 'phone number'.

Page 715/1186 | < Previous Page | 711 712 713 714 715 716 717 718 719 720 721 722  | Next Page >

  • What follows after lexical analysis?

    - by madflame991
    I'm working on a toy compiler (for some simple language like PL/0) and I have my lexer up and running. At this point I should start working on building the parse tree, but before I start I was wondering: how much information can one gather from just the stream of tokens? Here's what I've gathered so far:

    - Syntax highlighting is already possible with only the list of tokens: numbers, operators and keywords each get coloured according to their token type.
    - Autoformatting (indenting) should also be possible. How? Specify for each token type how many spaces or newline characters should follow it. Also, maintain an alignment variable while printing tokens: when the code printer reads "{" increment the alignment variable by 1, and decrement it by 1 for "}"; whenever it starts printing on a new line, the printer indents according to this variable.
    - In languages without nested subroutines one can get a complete list of subroutines and their signatures. How? Just read what follows the "procedure" or "function" keyword until you hit the first ")" (this should work fine in a Pascal-like language with no nested subroutines).
    - In languages like Pascal you can even determine local variables and their types, as they are declared in a special place (you can't handle initialization as well, but you can parse sequences like "var a, b, c: integer").
    - Detection of recursive functions may also be possible, or even a graph of which subroutine calls which: if one can identify the body of a function, one can also search it for mentions of other functions' names.
    - Gathering statistics about the code, like the number of lines, instructions and subroutines.

    EDIT: I clarified why I think some of these are possible. As I read comments and responses I realise that the answer depends very much on the language that I'm parsing.
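    For illustration, here is a minimal sketch (in Python, with a deliberately simplified token representation - plain strings rather than typed tokens) of the indenting idea described above, working purely on a flat token list with no parse tree:

        # Sketch: re-indent code from a flat token stream; "{", "}" and ";" drive layout.
        def pretty_print(tokens, indent_width=4):
            out, line, level = [], [], 0

            def flush():
                if line:
                    out.append(" " * (indent_width * level) + " ".join(line))
                    line.clear()

            for tok in tokens:
                if tok == "}":
                    flush()
                    level = max(level - 1, 0)   # close the block before printing "}"
                line.append(tok)
                if tok in ("{", "}", ";"):      # these token types force a line break
                    flush()
                    if tok == "{":
                        level += 1
            flush()
            return "\n".join(out)

        print(pretty_print(["if", "(", "x", ")", "{", "y", "=", "1", ";", "}"]))

    A real formatter would also need per-token-type spacing rules (e.g. no space before ";"), but the alignment-variable mechanism is the one sketched here.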

    Read the article

  • How to motivate visitors to comment

    - by Michal
    At first I must apologize, because I am not sure if this question is valid for the webmasters topic. I deal with the problem as a webmaster; however, I think the question is more related to marketing. Nevertheless, I searched for a marketing Stack Overflow on Meta Stack Overflow and did not find such a site.

    Background: Four days ago I launched a portal with a database of barber salons in which people can find a salon through various criteria, see its photos and details, and also post a comment with their own opinion. The development took me half a year, and it took another 2 months to fill the database with information about the barbers (I also hired three other people for this job). I don't have a big problem getting people to the portal: I pay for PPC, comment on barber discussions, etc. In the past four days I've reached a satisfactory number of visitors.

    Problem: Everyone wants to search and read comments, but no one is willing to post their own opinion of a barber. So two days ago I tried the following:

    - Made comments anonymous, so no one has to be afraid of compromising their identity with a salon owner
    - Prepared a competition in which users can win a cosmetic package if they comment on at least three different salons
    - Paid for a PPC campaign on Facebook telling people about the competition
    - Registered the competition on 20 competition portals

    And the result: people are commenting on Facebook that the competition is a good idea, and they are giving likes on Facebook, but no one has posted a single comment on a barber salon. I am getting a little confused about what I am doing wrong. I will be thankful for any advice.

    Read the article

  • When using TCP load balancing with HAProxy, does all outbound traffic flow through the LB?

    - by user122875
    I am setting up an app to be hosted on VMs (probably Amazon, but that is not set in stone) which will require both HTTP load balancing and load balancing of a large number (50k or so, if possible) of persistent TCP connections. The amount of data is not all that high, but updates are frequent. Right now I am evaluating load balancers and am a bit confused about the architecture of HAProxy. If I use HAProxy to balance the TCP connections, will all the resulting traffic have to flow through the load balancer? If so, would another solution (such as LVS or even nginx_tcp_proxy_module) be a better fit?
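    For reference, HAProxy is a full proxy: it accepts the client connection and opens its own connection to the chosen backend, so both directions of a balanced TCP stream pass through it (unlike LVS in direct-routing mode, where replies can bypass the balancer). A minimal TCP-mode sketch of haproxy.cfg, with placeholder addresses and ports:

        frontend app_tcp
            bind *:5000
            mode tcp
            default_backend app_servers

        backend app_servers
            mode tcp
            balance leastconn
            server app1 10.0.0.11:5000 check
            server app2 10.0.0.12:5000 check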

    Read the article

  • Help Creating a Google Analytics Funnel for the Checkout process

    - by Drew
    I have a funnel question. I am currently working on tracking (through GA) guest and logged-in member activity once they get to my site's shopping cart, but I need help with setting up funnels, specifically to see:

    - Total sales
    - Logged-in member sales
    - Guest member sales

    The URLs associated with the checkout process are:

    Logged-in members:
    - /cart (arriving at checkout)
    - /checkout (checking out as a logged-in member)
    - /checkout/confirmation (thank you - confirmed sale)

    Guest members:
    - /cart (arriving at checkout)
    - /checkout-guest (checking out as a guest)
    - /checkout/confirmation (thank you - confirmed sale)

    I've tested the funnels set up for the above with 9 transactions, but the end maths doesn't seem to line up. The total sales funnel shows 9 completed transactions when only tracking these two URLs: /cart and /checkout/confirmation. Which is great - it's working. The logged-in member funnel shows a total of 9 completed transactions based on each of the logged-in URL steps (above) being tracked in the funnel. Not good, because this number should be 3. The guest checkout funnel (see guest steps above) shows 9 as well. What the?!?!?!? The results I am looking for should be: total sales = 9, logged-in members = 3, guest members = 6. Is there any way to set these URLs up so that the funnels report the correct results, or do I need to change the URLs and give logged-in members and guests standalone purchase confirmation pages (which would mean I could not track total sales combining both streams)? Any knowledge in this area is welcome. Thanks.

    Read the article

  • Mod_rewrite not working on ISPConfig 3 Server

    - by Akahadaka
    Problem: I recently migrated a Drupal site from a shared hosting server to my own VM. Everything appears to work correctly except clean URLs.

    My VM setup: Ubuntu 10.04, LAMP, ISPConfig 3.

    What I've tried: from reading up on a number of Drupal forums I've tried the following, in this order:

    1. Checked that mod_rewrite is installed and enabled
    2. Changed PHP from FastCGI to mod_php (I'd prefer to use FastCGI or suPHP, though, to avoid having tmp/files folders with 777 permissions)
    3. Changed the Redirect type to L in ISPConfig (Sites > domain.com > Redirect)
    4. Changed /etc/apache2/sites-enabled/000-default:

            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                ...
            </Directory>

    I'm not sure about points 3 and 4; I do want all domains to be able to use mod_rewrite out of the box.

    Question: Have I done something wrong, or am I missing a step? Ultimately I would like to use FastCGI and have clean URLs working on all ISPConfig 3 domains without having to make any changes to individual domain settings. Any ideas appreciated, I'll try them all.
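    For reference, Drupal's clean URLs depend on rewrite rules in its .htaccess, which look roughly like this (generic sketch, not the exact shipped file) and only take effect when AllowOverride permits them for the vhost:

        <IfModule mod_rewrite.c>
          RewriteEngine on
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
        </IfModule>

    A quick way to confirm the module is actually loaded is apache2ctl -M | grep rewrite.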

    Read the article

  • configuring a USB modem (Huawei EC156) in Ubuntu 13.10

    - by user205427
    I am facing difficulty installing my USB modem under Ubuntu 13.10. Contrary to what many have suggested, it does not get detected automatically, nor does setting up a new connection help. The USB device is listed in lsusb, but not under Network Manager or Devices; it is detected as a CD-ROM. What I understood from the web is that usb_modeswitch can be used to switch it to modem mode. Even the 'Enable Mobile Broadband' option is not shown in Network Manager. What is interesting is that when I start the laptop with Windows 7, use the USB modem, and then restart into Ubuntu, both 'Enable Mobile Broadband' and the mobile broadband connection can be seen. Sadly, an internet connection still could not be established. I tried the usb_modeswitch command as suggested elsewhere, but it does not seem to work. Following is the output:

        Take all parameters from the command line
        * usb_modeswitch: handle USB devices with multiple modes
        * Version 2.0.1 (C) Josua Dietze 2013
        * Based on libusb1/libusbx
        ! PLEASE REPORT NEW CONFIGURATIONS !
        DefaultVendor= 0x12d1
        DefaultProduct= 0x1505
        HuaweiMode=1
        NeedResponse=0
        InquireDevice enabled (default)
        Look for default devices ...
        found USB ID 8087:0020
        found USB ID 1d6b:0002
        found USB ID 0461:4db6
        found USB ID 12d1:1505
        vendor ID matched
        product ID matched
        found USB ID 138a:0007
        found USB ID 03f0:231d
        found USB ID 8087:0020
        found USB ID 1d6b:0002
        Found devices in default mode (1)
        Access device 005 on bus 001
        Get the current device configuration ...
        OK, got current device configuration (1)
        Use interface number 0
        Use endpoints 0x08 (out) and 0x87 (in)
        Inquire device details; driver will be detached ...
        Looking for active driver ...
        OK, driver detached
        INQUIRY message failed (error -9)
        USB description data (for identification)
        -------------------------
        Manufacturer: HUA?WEI TECHNOLOGIES
        Product: HUAWEI Mobile
        Serial No.: ???????????????????
        -------------------------
        Send old Huawei control message ...
        -> Run lsusb to note any changes. Bye!

    I have been stuck with this problem for 4 days now; any help would be appreciated.
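    As a hedged follow-up sketch (the exact flags that work for this model are an assumption): after running the switch, check whether the product ID has changed and whether serial devices appeared, and if not, try sending the Huawei message explicitly:

        lsusb | grep 12d1         # product ID should no longer be 1505 after a successful switch
        dmesg | tail -n 20        # look for new ttyUSB* devices being created
        sudo usb_modeswitch -v 12d1 -p 1505 -J    # -J sends the newer Huawei mode-switch message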

    Read the article

  • What are the advantages of programming under an OS as opposed to a bare metal executive?

    - by gby
    Assume you are presented with an embedded system application to program, in C, on a multi-core environment (think a Cavium or Tilera) and need to choose between two environments: code the application under Linux in SMP mode, or code the application under a thin bare-metal executive (something like a very minimal RTOS), perhaps with a single core running UP Linux to serve control tasks. For the purpose of this question, assume that both environments provide the same level of performance guarantees in any measurable aspect of run-time performance, including the number of meaningful actions per second, jitter, latency, real-time considerations - the works. (And yes, I realize this is by far not a trivial assumption; bear with me.) How would you justify going with a Linux SMP based solution rather than a bare-metal thin executive solution? The question may seem silly - it certainly seems obvious to me - but I have to convince someone who does not think the same. Could you help make a list of arguments in favor of choosing a real SMP-aware OS (Linux) vs. a bare-metal executive, assuming performance guarantees are NOT an issue? Many thanks

    Read the article

  • Executing NUnit Tests using the Visual Studio 2012 Test Runner

    - by David Paquette
    At a recent Visual Studio 2012 event at the Calgary .NET User Group, I was told that I could run my NUnit tests directly in Visual Studio 2012 without any special plugins. Naturally, I was very excited and immediately tried running my NUnit tests. I was somewhat disappointed to see that the Test Runner did not discover any of them. Apparently, you do still need to install an extension that supports NUnit. Microsoft has completely re-written the Test Runner in Visual Studio 2012 and opened it up for anyone to write Test Adapters for any unit test framework (not just MSTest). Once the correct test adapters are installed, everything works great. Luckily, there are already a good number of adapters written. Here are some Test Adapters that you might find useful:

    - NUnit Test Adapter - this one is still in beta, but it does work with the official Visual Studio 2012 release
    - xUnit.net Test Adapter
    - Silverlight Unit Test Adapter
    - Chutzpah Test Adapter

    Overall, I still prefer the unit test runner in ReSharper, but this is a great new feature for those who might not have a ReSharper license.

    Read the article

  • Run database checks but omit large tables or filegroups - New option in Ola Hallengren's Scripts

    - by Greg Low
    One of the things I've always wanted in DBCC CHECKDB is the option to omit particular tables from the check. The situation I often see is that companies with large databases have only one or two very large tables, and they want to run DBCC CHECKDB on the database to check everything except those couple of tables, due to time constraints. I posted a request about this on the Connect site some time ago: https://connect.microsoft.com/SQLServer/feedback/details/611164/dbcc-checkdb-omit-tables-option The workaround from the product team was that you could script out the checks that you did want to carry out, rather than omitting the ones you didn't. I didn't much like this as a workaround, as clients often had a very large number of objects that they did want to check and only one or two that they didn't. I've always been impressed with the work that our buddy Ola Hallengren has done on his maintenance scripts. He pinged me recently about my old Connect item and said he was going to implement something similar. The good news is that it's available now. Here are some examples he provided of the newly-supported syntax:

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKDB'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'ALL_OBJECTS,-AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'AdventureWorks.PRIMARY'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'ALL_FILEGROUPS,-AdventureWorks.PRIMARY'

    Note the syntax to omit an object from the list of objects, and the option to omit one filegroup. Nice! Thanks Ola! You'll find details here: http://ola.hallengren.com/

    Read the article

  • Handling extremely large numbers in a language which can't?

    - by Mallow
    I'm trying to think about how I would go about doing calculations on extremely large numbers (ad infinitum - integers, no floats) if the language construct is incapable of handling numbers larger than a certain value. I am sure I am not the first nor the last to ask this question, but the search terms I am using aren't giving me an algorithm to handle those situations; rather, most suggestions offer a language change or variable change, or talk about things that seem irrelevant to my search. So I need a little guidance. I would sketch out an algorithm like this:

    - Determine the max length of the integer variable for the language.
    - If a number is more than half the max length of the variable, split it into an array (to give a little play room).
    - Array order: [0] = the digits furthest to the right, ..., [n-max] = the digits furthest to the left. E.g. for the number 29392023: Array[0] = 23, Array[1] = 20, Array[2] = 39, Array[3] = 29.

    Since I established half the length of the variable as the cut-off point, I can then calculate the ones, tens, hundreds, etc. place via the halfway mark, so that if a variable's max length was 10 digits (0 to 9999999999), then I know that halving that to five digits gives me some play room. So if I add or multiply, I can have a checker function that sees that the sixth digit (from the right) of Array[0] is in the same place as the first digit (from the right) of Array[1]. Dividing and subtracting have their own issues which I haven't thought about yet. I would like to know about the best implementations for supporting larger numbers than the language can handle natively.
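    For illustration, a minimal sketch of the add path under this chunking scheme, written in Python (which already has arbitrary-precision integers, so this is purely to show the chunk-and-carry idea, using base 10^5 chunks as suggested above):

        BASE = 10 ** 5   # half of a hypothetical 10-digit integer limit, leaving room for carries

        def to_chunks(digits):
            # Split a decimal string into base-10^5 chunks, least significant chunk first.
            chunks = []
            while digits:
                chunks.append(int(digits[-5:]))
                digits = digits[:-5]
            return chunks or [0]

        def add(a, b):
            # Add two chunk arrays, propagating the carry chunk by chunk.
            result, carry = [], 0
            for i in range(max(len(a), len(b))):
                total = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
                result.append(total % BASE)
                carry = total // BASE
            if carry:
                result.append(carry)
            return result

        def to_string(chunks):
            head, *rest = reversed(chunks)
            return str(head) + "".join(str(c).zfill(5) for c in rest)

        print(to_string(add(to_chunks("99999999999999"), to_chunks("1"))))  # 100000000000000

    Multiplication works chunk by chunk in the same schoolbook fashion; division and subtraction need borrow handling and are, as noted, fiddlier.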

    Read the article

  • setting nproc in /etc/security/limits.conf prevents ssh login

    - by omry
    I am trying to use /etc/security/limits.conf on Linux (Debian) to limit the number of processes per user. For starters, I tried to limit my own user's processes by adding this to /etc/security/limits.conf:

        omry hard nproc 100

    This locked my user out of ssh. I could open new processes (verified with su omry), but could not log into ssh with that user; sshd reported this in its log:

        fatal: setreuid 1000: Resource temporarily unavailable

    Also, I am certain my user is not running anything near 100 processes (actually 6). What can be the reason for this?
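    As a hedged aside: on Linux, the nproc limit (RLIMIT_NPROC) is checked against the number of kernel tasks owned by the real UID, which includes threads rather than just processes, so a thread-heavy service can push the count far above what a plain ps shows. A quick way to count tasks for the user:

        ps -eLf | awk '$1 == "omry"' | wc -l    # one line per thread (LWP) owned by the user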

    Read the article

  • HALEVT troubleshooting: VFAT USB storage device gets mounted with root:root user:group

    - by Nova deViator
    Hi, I've been banging my head for a number of days around this problem. Using halevt for automounting, everything mostly works; the only thing is that halevt mounts external USB storage devices as root, so as a user I cannot write to files on them. halevt gets run as the halevt user on boot through an /etc/init.d script. This is Ubuntu Lucid with the Awesome WM - no GDM. Running halevt as my own user does not seem to work (halevt runs but doesn't respond on insert). I know HAL is deprecated and removed and I should probably write my own udev rules, but until then it seems there must be a simple hack that enables mounting VFAT/NTFS devices with a specific uid/gid. This question/answer helps a lot, but not specifically with the above.
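    For context, this ultimately comes down to mount options: VFAT has no ownership metadata of its own, so write access for a user has to come from uid/gid (and umask-style) options at mount time. A manual sketch of the target result (device node and IDs are examples); whatever halevt or udev ends up invoking has to pass something equivalent:

        sudo mount -t vfat -o uid=1000,gid=1000,dmask=022,fmask=133 /dev/sdb1 /media/usb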

    Read the article

  • MSCC: Global Windows Azure Bootcamp - 29th March 2014

    The Mauritius Software Craftsmanship Community proudly presents the Global Windows Azure Bootcamp 2014 in Mauritius - MSCC together with Microsoft, Ceridian and Emtel. We are very happy and excited about our participation in this global event and would like to draw your attention to the official invitation letter below. Please sign up and RSVP on the official website of the MSCC. Participation is free!

    Call for action: Please create more awareness of this event in Mauritius and use the hash tag #gwabmru as well as the shortened link: http://aka.ms/gwabmru - and remember: Sharing is Caring!

    Official invitation letter to the GWAB 2014 in Mauritius: With over 130 confirmed locations around the globe, the Global Windows Azure Bootcamp is going to be a truly memorable event - and now here's your chance to take part! In April of 2013 we held the first Global Windows Azure Bootcamp at more than 90 locations around the globe! This year we want to again offer a one-day deep-dive class to help thousands of people get up to speed on discovering cloud computing applications for Windows Azure. In addition to this great learning opportunity, the hands-on labs will feature pooling a huge global compute farm to perform diabetes research! In Mauritius, the event will be organised by Microsoft Indian Ocean Islands & French Pacific in partnership with The Mauritius Software Craftsmanship Community (MSCC) and sponsored by Microsoft, Ceridian and Emtel.

    What do I need to bring? You will need to bring your own computer which can run Visual Studio 2012 or 2013 (i.e. Windows, OSX, Ubuntu with virtualization, etc.) and have it preloaded with the following:

    - Visual Studio 2012 or 2013
    - The Windows Azure SDK - http://www.windowsazure.com/en-us/develop/net/

    Optionally (or if you will not be doing just .NET labs), the following can also be installed:

    - Node.js SDK - http://www.windowsazure.com/en-us/develop/nodejs/
    - JAVA SDK - http://www.windowsazure.com/en-us/develop/java/
    - Doing mobile? Android? iOS? Windows Phone or Windows 8? - http://www.windowsazure.com/en-us/develop/mobile/
    - PHP - http://www.windowsazure.com/en-us/develop/php/

    More info here: http://www.windowsazure.com/en-us/documentation

    Important: Please do the installation upfront, as there will be very little time to troubleshoot installations during the day.

    Read the article

  • How to use Pixel Bender (pbj) in ActionScript3 on large Vectors to make fast calculations?

    - by Arthur Wulf White
    Remember my old question: 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing? I figured I could use a Pixel Bender shader to do the computation for any large group of elements in a game to save on processing time - at least it's a theory worth checking. I also read this question: Pass large array to pixel shader, which I'm guessing is about accomplishing the same thing in a different language. I read this tutorial: http://unitzeroone.com/blog/2009/03/18/flash-10-massive-amounts-of-3d-particles-with-alchemy-source-included/ I am attempting to do some tests. Here is some of the code:

        private const SIZE : int = Math.pow(10, 5);
        private var testVectorNum : Vector.<Number>;

        private function testShader():void {
            shader.data.ab.value = [1.0, 8.0];
            shader.data.src.input = testVectorNum;
            shader.data.src.width = SIZE/400;
            shader.data.src.height = 100;
            shaderJob = new ShaderJob(shader, testVectorNum, SIZE / 4, 1);
            var time : int = getTimer(), i : int = 0;
            shaderJob.start(true);
            trace("TEST1 : ", getTimer() - time);
        }

    The problem is that I keep getting an error saying: [Fault] exception, information=Error: Error #1000: The system is out of memory.

    Update: I managed to partially work around the problem by converting the vector into BitmapData (using this technique I still get a speed boost of 3x using Pixel Bender):

        private function testShader():void {
            shader.data.ab.value = [1.0, 8.0];
            var time : int = getTimer(), i : int = 0;
            testBitmapData.setVector(testBitmapData.rect, testVectorInt);
            shader.data.src.input = testBitmapData;
            shaderJob = new ShaderJob(shader, testBitmapData);
            shaderJob.start(true);
            testVectorInt = testBitmapData.getVector(testBitmapData.rect);
            trace("TEST1 : ", getTimer() - time);
        }

    Read the article

  • How to create an extensible rope in Box2D?

    - by Thomas
    Let's say I'm trying to create a ninja lowering himself down a rope, or pulling himself back up, all whilst he might be swinging from side to side or being hit by objects. Basically like http://ninja.frozenfractal.com/ but with Box2D instead of hacky JavaScript. Ideally I would like to use a rope joint in Box2D that allows me to change the length after construction, but the standard Box2D RopeJoint doesn't offer that functionality. I've considered a PulleyJoint, connecting the other end of the "pulley" to an invisible kinematic body that I can control to change the length, but a PulleyJoint is more like a rod than a rope: it constrains the maximum length, but unlike RopeJoint it constrains the minimum as well. Re-creating a RopeJoint every frame with a new length is rather inefficient, and I'm not even sure it would work properly in the simulation. I could create a "chain" of bodies connected by RevoluteJoints, but that is also less efficient and less robust. I also wouldn't be able to change the length arbitrarily, only by adding and removing a whole number of links, and it's not obvious how I would connect the remainder without violating existing joints. This sounds like something that should be straightforward to do. Am I overlooking something?

    Read the article

  • Distributed, Parallel, Fault-tolerant File System

    - by Eddified
    There are so many choices that it's hard to know where to start. My requirements are these:

    - Runs on Linux
    - Most of the files will be between 5-9 MB in size; there will also be a significant number of small-ish JPGs (100px x 100px)
    - All of the files need to be available over HTTP
    - Redundancy - ideally it would provide space efficiency similar to the 75% of RAID 5 (in RAID 5 this is calculated thus: with 4 identical disks, 25% of the space is used for parity = 75% efficient)
    - Must support several petabytes of data
    - Scalable
    - Runs on commodity hardware

    In addition, I am looking for these qualities, though they are not "requirements":

    - Stable, mature file system
    - Lots of momentum and support
    - etc.

    I would like some input as to which file system works best for the given requirements. Some people at my organization are leaning towards MogileFS, but I'm not convinced of the stability and momentum of that project. GlusterFS and Lustre, based on my limited research, appear to be better supported... Thoughts?

    Read the article

  • Collection RemoveAll Extension Method

    - by João Angelo
    I had previously posted a RemoveAll extension method for the Dictionary<K,V> class; now it's time to have one for the Collection<T> class. The signature is the same as in the corresponding method already available in List<T>, and the implementation relies on the RemoveAt method to perform the actual removal of each element. Finally, here's the code:

        public static class CollectionExtensions
        {
            /// <summary>
            /// Removes from the target collection all elements that match the specified predicate.
            /// </summary>
            /// <typeparam name="T">The type of elements in the target collection.</typeparam>
            /// <param name="collection">The target collection.</param>
            /// <param name="match">The predicate used to match elements.</param>
            /// <exception cref="ArgumentNullException">
            /// The target collection is a null reference.
            /// <br />-or-<br />
            /// The match predicate is a null reference.
            /// </exception>
            /// <returns>Returns the number of elements removed.</returns>
            public static int RemoveAll<T>(this Collection<T> collection, Predicate<T> match)
            {
                if (collection == null)
                    throw new ArgumentNullException("collection");
                if (match == null)
                    throw new ArgumentNullException("match");

                int count = 0;
                for (int i = collection.Count - 1; i >= 0; i--)
                {
                    if (match(collection[i]))
                    {
                        collection.RemoveAt(i);
                        count++;
                    }
                }

                return count;
            }
        }

    Read the article

  • Visual Studio Real Dark mode (2010, 2012, 2013)

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/02/visual-studio-real-dark-mode-201020122013.aspx

    When Visual Studio 2010 was released 3 years ago, I soon showed some people a demo of how great a dark mode for Visual Studio would be. Soon we got theme plugins that let us modify the look of Visual Studio. http://studiostyl.es/ already provides lots of wonderful color schemes for customizing the theme. These themes also work in WebMatrix 2; WebMatrix 2 has a theme plugin written by Yishai Galatzer that is awesome.

    In Visual Studio 2012 we got a native dark mode, which means we can configure it without any plugin or other requirement. In this post I have a demo to show you how to use the dark mode that is part of Windows 7 (and Windows 8 too).

    A few months ago I described a problem where WebMatrix 2 runs slowly; it runs better in Windows 7 dark mode. "Windows 7 dark mode" simply refers to right click > Personalize > High contrast theme at the bottom of the window. This setting makes things a little bit faster.

    Once you have set this, you will see that Visual Studio no longer looks right, because its color scheme is now broken. What you need now is to import a theme from http://studiostyl.es/. When you import one, it looks good again. The demo shown is from the 2010 Express tools for Windows Phone, and it behaves the same for later versions such as 2012 and 2013. Now your VS looks dark - everything is dark. Firefox and IE will not run totally in dark mode, but you can use Chrome, which is less affected. If you benchmark it now, you will find that things that used to take a long time to load run faster.

    Note: this is an experiment. Remember to back up your settings before applying a new theme. All I am trying to do is make my VS run faster. If you have any trouble or ideas, please comment.

    Thanks for reading my post.

    Read the article

  • what kind of memory can be categorized as Modified Memory in Resource Monitor

    - by Kavin
    In Windows 7 and Windows Server 2008 R2, there is a new Resource Monitor that is very useful and powerful for monitoring the system. In the Memory section, I see a category called Modified (orange). The official description is: "Memory whose contents must be written to disk before it can be used for another purpose." But I am still confused. What kind of memory is Modified? In which cases can we say that some amount of memory is Modified? Can anyone give me a specific example? Is the following guess correct? When a program wants to write something to disk, it actually writes the content to an IO buffer, which is in memory. After the OS flushes this area of memory to disk, does the memory become Modified, or Standby?

    Read the article

  • Is it OK to use dynamic typing to reduce the amount of variables in scope?

    - by missingno
    Often, when I am initializing something, I have to use a temporary variable, for example:

        file_str = "path/to/file"
        file_file = open(file_str)

    or

        regexp_parts = ['foo', 'bar']
        regexp = new RegExp( regexp_parts.join('|') )

    However, I like to reduce the scope of my variables to the smallest scope possible, so there are fewer places where they can be (mis-)used. For example, I try to use for(var i ...) in C++ so the loop variable is confined to the loop body. In these initialization cases, if I am using a dynamic language, I am then often tempted to reuse the same variable in order to prevent the initial (and now useless) value from being used later in the function:

        file = "path/to/file"
        file = open(file)

        regexp = ['...', '...']
        regexp = new RegExp( regexp.join('|') )

    The idea is that by reducing the number of variables in scope I reduce the chances of misusing them. However, this sometimes makes the variable names look a little weird, as in the first example, where "file" refers to a "filename". I think perhaps this would be a non-issue if I could use non-nested scopes:

        begin scope1
          filename = ...
          begin scope2
            file = open(filename)
          end scope1
          //use file here
          //can't use filename by accident
        end scope2

    but I can't think of any programming language that supports this. What rules of thumb should I use in this situation? When is it best to reuse the variable? When is it best to create an extra variable? What other ways do we solve this scope problem?
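    For illustration, one common way to get the effect of that smaller scope in a dynamic language is to push the throw-away names into a small helper function, so they simply never exist in the caller's scope. A Python sketch with hypothetical names:

        import re

        def build_pattern():
            # 'parts' exists only inside this function; callers never see the raw pieces.
            parts = ["foo", "bar"]
            return re.compile("|".join(parts))

        pattern = build_pattern()
        print(bool(pattern.search("some bar here")))   # True

    The same applies to the file example: a tiny open_config()-style helper keeps the path string local, at the cost of an extra function definition.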

    Read the article

  • Producer-consumer pattern with consumer restrictions

    - by Dan
    I have a processing problem that I am thinking is a classic producer-consumer problem, with the two added wrinkles that there may be a variable number of producers and that no more than one item per producer may be consumed at any one time. I will generally have 50-100 producers and as many consumers as CPU cores on the server. I want to maximize the throughput of the consumers while ensuring that there is never more than one work item in process from any single producer. This is more complicated than the classic producer-consumer problem, which I think assumes a single producer and no restriction on which work items may be in progress at any one time. I think the problem of multiple producers is relatively easily solved by enqueuing all work items on a single work queue protected by a critical section. I think the restriction on simultaneously processing work items from any single producer is harder, because I cannot think of any solution that does not require each consumer to notify some kind of work dispatcher that a particular work item has been completed, so as to lift the restriction on work items from that producer. In other words, if Consumer2 has just completed WorkItem42 from Producer53, there needs to be some kind of callback or notification from Consumer2 to a work dispatcher to allow the work dispatcher to release the next work item from Producer53 to the next available consumer (whether Consumer2 or otherwise). Am I overlooking something simple here? Is there a known pattern for this problem? I would appreciate any pointers.
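    For illustration, a minimal single-process sketch of the dispatcher idea (Python threads, hypothetical work-item shape): one shared queue plus a set of "busy" producers, with consumers calling done() when they finish so the next item from that producer becomes eligible again:

        import threading
        from collections import deque

        class Dispatcher:
            """Hands out work items, never more than one in flight per producer."""

            def __init__(self):
                self._pending = deque()          # (producer_id, item), in arrival order
                self._busy = set()               # producers with an item currently in flight
                self._cv = threading.Condition()

            def submit(self, producer_id, item):
                with self._cv:
                    self._pending.append((producer_id, item))
                    self._cv.notify()

            def get_work(self):
                with self._cv:
                    while True:
                        for i, (producer_id, item) in enumerate(self._pending):
                            if producer_id not in self._busy:
                                del self._pending[i]
                                self._busy.add(producer_id)
                                return producer_id, item
                        self._cv.wait()          # nothing eligible right now

            def done(self, producer_id):
                with self._cv:
                    self._busy.discard(producer_id)
                    self._cv.notify_all()        # items from this producer are eligible again

    Each consumer loops on get_work(), processes the item, then calls done(producer_id). The linear scan in get_work() is fine for 50-100 producers and could be replaced by per-producer sub-queues if it ever became a bottleneck.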

    Read the article

  • Excel removing leading zeros when displaying CSV data

    - by Velika Kudac
    I have a CSV text file with the following content:

        "Col1","Col2"
        "01",A
        "2",B
        "10", C

    When I open it with Excel, cell A2 displays "01" as a number, without the leading 0. When I format rows 2 through 4 as "Text", the display changes, but the leading "0" is still gone. Is there a way to open a CSV file in Excel and see all of the leading zeros in the file by flipping some option? I do not want to have to retype '01 in every cell that should have a leading zero. Furthermore, using a leading apostrophe requires saving the changes in XLS format when CSV is desired. My goal is simply to use Excel to view the actual content of the file as text, without Excel trying to do me any formatting favors.
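    As a hedged workaround at the file level (it does change the CSV contents, so it only helps if you control how the file is generated): writing the sensitive values as formulas makes Excel display them as text, zeros included, when the file is opened directly:

        "Col1","Col2"
        ="01",A
        ="2",B
        ="10",C

    Alternatively, importing the unmodified file via Excel's Text Import Wizard (Data > From Text) and marking the column as Text preserves the zeros without editing the file.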

    Read the article

  • Suppress log messages about 3ware disk temperature changes on CentOS?

    - by Stefan Lasiewski
    I have a number of CentOS 5 servers which use 3ware RAID controllers. These servers are bugging my team with messages about minor temperature changes, like this:

        Jun 8 12:32:39 HOST smartd[1231]: Device: /dev/twa0 [3ware_disk_01], SMART Usage Attribute: 194 Temperature_Celsius changed from 119 to 118
        Jun 8 12:32:39 HOST smartd[1231]: Device: /dev/twa0 [3ware_disk_03], SMART Usage Attribute: 194 Temperature_Celsius changed from 122 to 121

    How can I suppress these messages? According to man smartd.conf:

        To disable any of the 3 reports, set the corresponding limit to 0. Trailing zero arguments may be omitted. By default, all temperature reports are disabled ('-W 0').

    On my systems, smartd is reporting about temperature changes by default. I tried a manual approach. In /etc/smartd.conf, I have the following:

        /dev/twa0 -d 3ware,1 -a -W 0
        /dev/twa0 -d 3ware,3 -a -W 0

    But this still does not suppress the messages. Since these messages show up in /var/log/messages, LogWatch is sending unnecessary emails every night.
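    One hedged thing to check (assuming the stock CentOS smartd.conf is still in place): smartd stops reading the configuration file at the first DEVICESCAN entry, so explicit device lines added after it are silently ignored. A sketch of a config that keeps the per-disk lines effective:

        # /etc/smartd.conf (sketch): disable the catch-all line so the explicit entries are read
        #DEVICESCAN ...                 # comment out whatever DEVICESCAN line the distro shipped
        /dev/twa0 -d 3ware,1 -a -W 0
        /dev/twa0 -d 3ware,3 -a -W 0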

    Read the article

  • Configure all hosts, then create a list of the config for all hosts?

    - by AME
    I deployed a large number of hosts with Ansible, which worked very nicely. Each host got its individual settings and configuration. Now I'd like to generate a config file for another system that uses these hosts. For it, I need, for every host, a part of the generated configuration (the part that configures the database). Here is an example of the situation, with two hosts having different configuration and the other system using a part of the Ansible-generated configuration:

        host1  ansible configured  dbA
        host2  ansible configured  dbQ

    The other system:

        host1 = dbA
        host2 = dbQ

    The values are computed differently (dbQ instead of dbB for host2, for example) if the host belongs to a different cluster and so on, making it impractical to just read the host configuration out of host_vars. I believe I would need to iterate over the hosts and let Ansible figure out the computed values for the variables as it would when deploying, but I do not know how to put that result into one template. Please advise :)
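    For illustration, a hedged sketch of the usual pattern: render a template once (delegated to the control machine) and loop over all hosts inside it, pulling each host's computed value out of hostvars. The variable names db_name and computed_db_value are hypothetical; values computed only at run time need to be published with set_fact on each host first, so they are visible in hostvars:

        # playbook (sketch)
        - hosts: all
          tasks:
            - name: publish the computed database value for this host
              set_fact:
                db_name: "{{ computed_db_value }}"

            - name: build the combined config for the other system
              template:
                src: combined.conf.j2
                dest: /tmp/combined.conf
              delegate_to: localhost
              run_once: true

        # combined.conf.j2 (sketch)
        {% for host in groups['all'] %}
        {{ host }} = {{ hostvars[host]['db_name'] }}
        {% endfor %}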

    Read the article

  • Approach for monitoring internet backbone traffic volume

    - by Greg Harman
    I'm interested in getting a picture of relative volume across different internet backbones. In particular, I'd like to see how traffic volume over a given route differs over the course of a day or from one day to the next. InternetTrafficReport.com is the closest approximation to this that I've found online, and their approach is to test ping times to a number of key routers from several geographically-dispersed servers. This sounds like one straightforward way to measure, but I don't have several geographically-dispersed servers. Is there a different approach for sampling this type of information from a single server?
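    For illustration, a minimal sketch of the single-server version of that approach (the router list is a placeholder): ping a fixed set of distant routers on a schedule, log the average round-trip times, and compare the series hour by hour or day by day:

        import re
        import subprocess
        import time
        from datetime import datetime

        ROUTERS = ["198.51.100.1", "203.0.113.1"]   # placeholder backbone router IPs

        def avg_rtt_ms(host, count=5):
            # Returns the average RTT reported by the system ping, or None on failure.
            try:
                out = subprocess.run(["ping", "-c", str(count), "-q", host],
                                     capture_output=True, text=True, timeout=30).stdout
            except subprocess.TimeoutExpired:
                return None
            match = re.search(r"= [\d.]+/([\d.]+)/", out)   # the min/avg/max summary line
            return float(match.group(1)) if match else None

        while True:
            for router in ROUTERS:
                print(datetime.now().isoformat(), router, avg_rtt_ms(router))
            time.sleep(300)   # sample every 5 minutes

    Latency from a single vantage point is of course only a rough proxy for backbone load, which is why services like the one mentioned above aggregate probes from many locations.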

    Read the article
