Search Results

Search found 12247 results on 490 pages for 'non profit'.


  • How should I isolate computers with different roles on a network

    - by fishhead
I work in an industrial plant and we have one network (one physical wire) that is used both for office computers and for process systems. The office computers are only used for typical office needs, but occasionally connect to the process computers to obtain information from a SQL server or for some other purpose. A new initiative is rolling downhill from corporate: to standardize how computers are used at work. They would be severely locked down and only a standard set of applications would be allowed to execute. One of the requirements is to isolate non-office computers from the company domain. Our non-office computers are a mix of man-machine interfaces and SQL servers, all running non-standard software. My question is: how can we divorce the control-system computers from the company domain but still have access to those servers from the company domain? Thanks
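A common pattern for this (not from the original question) is to put the control systems on their own VLAN/subnet behind a firewall and permit only the specific traffic the office domain needs. A minimal sketch with iptables on a router between the two segments, using hypothetical addresses (office 10.1.0.0/16, process network 10.2.0.0/16, SQL host 10.2.0.5):

    # Allow office hosts to reach only the SQL Server port on one process host,
    # then drop all other office-to-process traffic.
    iptables -A FORWARD -s 10.1.0.0/16 -d 10.2.0.5 -p tcp --dport 1433 -j ACCEPT
    iptables -A FORWARD -s 10.1.0.0/16 -d 10.2.0.0/16 -j DROP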

    Read the article


  • Hidden characters inserted after pipe (|) followed by a space

    - by nifty
Very often on my Mac, when I type the pipe (|) character followed by a space character, an invisible character gets inserted in between. This is especially annoying when using the terminal, as it makes commands invalid. If I type the following in iTerm2, I often get: ls | cat zsh: command not found:  cat If I hit the up arrow key to recall my previous command, and then remove and reinsert the space between | and cat, the command works. When I copy-paste the working and non-working commands into a file, like this: non-working: ls | cat working: ls | cat and open it in Hex Fiend, it shows the following: non-working: ls |¬†cat working: ls | cat I've also experienced the same kind of issue in SublimeText2 using square brackets ([]) followed by a space, so I don't believe it's an issue with iTerm2.
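For anyone hitting the same symptom: on many Mac keyboard layouts, Option+Space types a non-breaking space (U+00A0), which is the usual culprit here. A minimal sketch for exposing the hidden byte with standard tools (nothing Mac-specific assumed):

    # A non-breaking space shows up in od -c as the UTF-8 byte pair 302 240 (0xC2 0xA0).
    printf 'ls |\302\240cat\n' | od -c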

    Read the article

  • How can I force my browser to search Google in English?

    - by Tom Wijsman
I'm tired of seeing sites like Google show up in my native language; I would rather have them in English. Yet I have to explicitly change the URL to .com, add en, and that kind of parameter in order for them to show up in English. Can I somehow force this? So, how is Google configured? It is set to English on the site itself, so it has to be my browser. Then how does my browser end up on non-English pages, like Google? It usually shows up in non-English when I'm performing a search, which uses: {google:baseURL}search?{google:RLZ}{google:acceptedSuggestion}{google:originalQueryForSuggestion}{google:searchFieldtrialParameter}{google:instantFieldTrialGroupParameter}sourceid=chrome&ie={inputEncoding}&q=%s When performing a search, it fills these variables in with non-English values. How can I tell my browser to fill them in with the English values? My Google Chrome options give preference to English:
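One hedged workaround, assuming Chrome lets you define your own search engine entry (Settings > Manage search engines): add hl=en, Google's interface-language parameter, to the search URL template so results pages are forced to English regardless of geolocation:

    {google:baseURL}search?hl=en&sourceid=chrome&ie={inputEncoding}&q=%s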

    Read the article

  • Bash - Test for multiple users

    - by Mike Purcell
I am trying to test whether the current user is one of the two users allowed to start a process, but I can't seem to get the multi-condition test to work correctly: test ($(whoami) != 'mpurcell' || $(whoami) != 'root')) && (echo "Cannot start script as non-ccast user..."; exit 1) Is there a way to test multiple users without having to write two lines, like this: test $(whoami) != 'mpurcell' && (echo "Cannot start script as non-ccast user..."; exit 1) test $(whoami) != 'root' && (echo "Cannot start script as non-ccast user..."; exit 1)
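For reference, the condition as written is always true: any user differs from at least one of the two names, so the tests must be combined with a logical AND, not OR. A minimal one-line sketch, assuming a POSIX shell or bash:

    # Exit unless the current user is mpurcell or root.
    [ "$(whoami)" != "mpurcell" ] && [ "$(whoami)" != "root" ] && { echo "Cannot start script as non-ccast user..."; exit 1; }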

    Read the article

  • How can I automatically convert all source code files in a folder (recursively) to a single PDF with syntax highlighting?

    - by Bentley4
I would like to convert the source code of a few projects to one printable file, to save on a USB stick and print out easily later. How can I do that? Edit: First off, I want to clarify that I only want to print the non-hidden files and directories (so no contents of .git, for example). To get a list of all non-hidden files in non-hidden directories in the current directory you can run the command find . -type f ! -regex ".*/\..*" ! -name ".*" as seen in the answer to this thread. As suggested in that same thread, I tried making a PDF file of the files using the command find . -type f ! -regex ".*/\..*" ! -name ".*" ! -empty -print0 | xargs -0 a2ps -1 --delegate no -P pdf but unfortunately the resulting PDF file is a complete mess.
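A hedged alternative sketch, assuming GNU enscript and Ghostscript's ps2pdf are installed; -E asks enscript to guess each file's language for highlighting, and the exact flags may vary by version:

    find . -type f ! -regex ".*/\..*" ! -name ".*" ! -empty -print0 \
      | xargs -0 enscript -E --color -p - 2>/dev/null \
      | ps2pdf - code.pdf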

    Read the article

  • How do I change the canvas size of a PNG with ImageMagick (GraphicsMagick)? (How to pad with transparency?)

    - by Pistos
    Alternatively: How do I take a non-square PNG and "fill out" the "rest" of the image with transparency so that the resulting square image has the original image centered in the square? ULTIMATELY, what I want is to take any image of any GM-supported format of any size, and create a scaled-down PNG (say, 40 pixels maximum for either dimension), with aspect ratio maintained, transparency-padded for non-square original images, AND with an already-prepared 40x40 PNG transparency mask applied. I already know how to scale down and keep aspect ratio; I already have the command for applying my composite. My only missing piece is square-alizing non-square images (padding with transparency). Single command preferred; multi-command chain acceptable. (edit) Extra info: Here's the composite command I'm using: gm composite -compose copyopacity mask.png source-and-target.png source-and-target.png where mask.png has white pixels for what I want to keep of source-and-target.png and transparent pixels for what I want to remove (and become transparent) of source-and-target.png.
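A minimal sketch of the padding step, written with ImageMagick-style options; recent GraphicsMagick builds accept the same -extent syntax, but if yours does not, the same command should work with ImageMagick's convert:

    # Scale to fit inside 40x40 (aspect ratio kept), then pad to an exact
    # 40x40 square with a transparent background, original image centered.
    gm convert input.png -resize 40x40 -background none -gravity center -extent 40x40 output.png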

    Read the article

  • How is the PHP extensions/modules directory structure organized?

    - by dotpointer
I'm trying to configure/build PHP 5.3.10 on Linux/Slackware 12, but the extensions end up in the wrong directory when I run make install. The extension dir defined in php.ini is: /usr/lib/php/extensions The problem is that when I run make install, the newly built extensions are copied to a subfolder of the extensions directory: /usr/lib/php/extensions/no-debug-non-zts-20090626 What am I supposed to do with this: copy the files up from the no-debug-non-zts-20090626 directory into the extensions directory, create symlinks from extensions to the modules in the no-debug-non-zts-20090626 directory (which would take a lot of time), or what? (I know I can do any of these, but I want to know the correct way.)
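For context: the subfolder name encodes the build flavor and the Zend module API date (20090626 corresponds to PHP 5.3), and the usual fix is to point extension_dir at it rather than moving files around. A sketch of the php.ini change:

    ; Point extension_dir at the API-versioned directory that make install uses.
    extension_dir = "/usr/lib/php/extensions/no-debug-non-zts-20090626"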

    Read the article

  • MVC2 and MVC Futures causing RedirectToAction issues

    - by Darragh
I've been trying to get the strongly typed version of RedirectToAction from the MVC Futures project to work, but I've been getting nowhere. Below are the steps I've followed and the errors I've encountered. Any help is much appreciated. I created a new MVC2 app and changed the About action on the HomeController to redirect to the Index page: Return RedirectToAction("Index") However, I wanted to use the strongly typed extensions, so I downloaded the MVC Futures from CodePlex and added a reference to Microsoft.Web.Mvc to my project. I added the following "import" statement to the top of HomeController.vb: Imports Microsoft.Web.Mvc I commented out the RedirectToAction above and added the following line: Return RedirectToAction(Of HomeController)(Function(c) c.Index()) So far, so good. However, I noticed that if I uncommented the first (non-generic) RedirectToAction, it now caused the following compile error:
Error 1 Overload resolution failed because no accessible 'RedirectToAction' can be called with these arguments: Extension method 'Public Function RedirectToAction(Of TController)(action As System.Linq.Expressions.Expression(Of System.Action(Of TController))) As System.Web.Mvc.RedirectToRouteResult' defined in 'Microsoft.Web.Mvc.ControllerExtensions': Data type(s) of the type parameter(s) cannot be inferred from these arguments. Specifying the data type(s) explicitly might correct this error. Extension method 'Public Function RedirectToAction(action As System.Linq.Expressions.Expression(Of System.Action(Of HomeController))) As System.Web.Mvc.RedirectToRouteResult' defined in 'Microsoft.Web.Mvc.ControllerExtensions': Value of type 'String' cannot be converted to 'System.Linq.Expressions.Expression(Of System.Action(Of mvc2test1.HomeController))'.
Even though IntelliSense was showing 8 overloads (the original 6 non-generic overloads, plus the 2 new overloads from the Futures assembly), it seems that when compiling the code, the compiler would only 'find' the 2 extension methods from the Futures assembly. I thought this might be an issue of conflicting versions of the MVC2 assembly and the Futures assembly, so I added MvcDiagnostics.aspx from the Futures download to my project, and everything looked correct:
ASP.NET MVC Assembly Information (System.Web.Mvc.dll) Assembly version: ASP.NET MVC 2 RTM (2.0.50217.0) Full name: System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 Code base: file:///C:/WINDOWS/assembly/GAC_MSIL/System.Web.Mvc/2.0.0.0__31bf3856ad364e35/System.Web.Mvc.dll Deployment: GAC-deployed
ASP.NET MVC Futures Assembly Information (Microsoft.Web.Mvc.dll) Assembly version: ASP.NET MVC 2 RTM Futures (2.0.50217.0) Full name: Microsoft.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=null Code base: file:///xxxx/bin/Microsoft.Web.Mvc.DLL Deployment: bin-deployed
This is driving me crazy! Because I thought this might be some VB issue, I created a new MVC2 project using C# and tried the same as above. I added the following "using" statement to the top of HomeController.cs: using Microsoft.Web.Mvc; This time, in the About action method, I could only manage to call the generic RedirectToAction by typing the full command as follows: return Microsoft.Web.Mvc.ControllerExtensions.RedirectToAction<HomeController>(this, c => c.Index()); Even though I had a "using" statement at the top of the class, if I tried to call the generic RedirectToAction as follows: return RedirectToAction<HomeController>(c => c.Index()); I would get the following compile error: Error 1 The non-generic method 'System.Web.Mvc.Controller.RedirectToAction(string)' cannot be used with type arguments What gives? It's not like I'm trying to do anything out of the ordinary. It's a simple vanilla MVC2 project with only a reference to the Futures assembly. I'm hoping I've missed something obvious, but I've been scratching my head for too long, so I figured I'd seek some assistance. If anyone has managed to get this simple scenario working (in VB and/or C#), could they please let me know what, if anything, they did differently? Thanks!

    Read the article

  • Dividing a large text into columns

    - by kalininew
    Problem: There is a big piece of the text: <div class="cont"> <p> Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. Nemo enim ipsam voluptatem, quia voluptas sit, aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos, qui ratione voluptatem sequi nesciunt, neque porro quisquam est, qui dolorem ipsum, quia dolor sit, amet, </p> <p> consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt, ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem vel eum iure reprehenderit, qui in ea voluptate velit esse, quam nihil molestiae consequatur, vel illum, qui dolorem eum fugiat, quo voluptas nulla pariatur? At vero eos et </p> <p> accusamus et iusto odio dignissimos ducimus, qui blanditiis praesentium voluptatum deleniti atque corrupti, quos dolores et quas molestias excepturi sint, obcaecati cupiditate non provident, similique sunt in culpa, qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio. Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, </p> <p> facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat. </p> <p> Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. Nemo enim ipsam voluptatem, quia voluptas sit, aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos, qui ratione voluptatem sequi nesciunt, neque porro quisquam est, qui dolorem ipsum, quia dolor sit, amet, </p> <p> consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt, ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem vel eum iure reprehenderit, qui in ea voluptate velit esse, quam nihil molestiae consequatur, vel illum, qui dolorem eum fugiat, quo voluptas nulla pariatur? At vero eos et </p> <p> accusamus et iusto odio dignissimos ducimus, qui blanditiis praesentium voluptatum deleniti atque corrupti, quos dolores et quas molestias excepturi sint, obcaecati cupiditate non provident, similique sunt in culpa, qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio. Nam libero tempore, cum soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, </p> <p> facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non recusandae. 
Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat. </p> </div> Required: divide it into two columns on the page, of roughly equal height. If possible: when the text container is resized, the columns should remain equal in height. Ideally it would be possible to set a number "n" and split the big piece of text into that many columns. Available tools: php, xslt, css, pure javascript (without jQuery). Which tool is best for this, and how can the solution be made cross-browser compatible? (A sketch of one approach follows.)
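One sketch of the CSS route: the CSS3 multi-column layout takes "n" as a plain number and balances column heights automatically by default; older browsers need the vendor prefixes, and very old ones get a single column as the fallback:

    /* column-count is the "n" (number of columns). */
    .cont {
      -moz-column-count: 2;
      -webkit-column-count: 2;
      column-count: 2;
      column-gap: 2em;
    }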

    Read the article

  • Gone fishing, because i like it

    - by NewDi
    Integer orci risus, vestibulum et pharetra in, accumsan sit amet diam. Praesent rutrum faucibus tellus, at ullamcorper ligula vestibulum non. Nam felis tortor, tempor nec tincidunt vel, porta a nisi. Cras dictum, orci vitae varius feugiat, lorem nisi euismod nisi, vel sodales ante ipsum ut sapien. Praesent varius, ligula sit amet laoreet mattis, nulla nisi tincidunt urna, at placerat libero leo id mauris. Fusce pretium facilisis quam, nec vulputate nulla faucibus non. Donec sodales iaculis dui in gravida. Sed consequat scelerisque eros, quis pulvinar ipsum auctor ac. Sed odio felis, euismod at tincidunt in, sagittis vel lacus. Praesent vitae nisi non augue fringilla ornare. Phasellus interdum tellus quis elit blandit mattis eu id sapien. Duis at augue libero, quis mattis lorem. Morbi ut mauris ligula, nec dapibus quam. Suspendisse et ipsum enim. Suspendisse vel erat lorem. Sed id velit risus, porttitor pharetra urna. Fusce vestibulum elementum turpis in vehicula. Nullam eu nulla ipsum. Ut viverra diam quis urna congue in ullamcorper massa hendrerit. Curabitur convallis tempor ipsum et condimentum. Suspendisse eget enim tellus. Cras id sapien elit, sit amet rutrum tortor. Quisque ac odio tortor, et vestibulum turpis. Integer et magna in erat placerat placerat. Proin ac dapibus leo. Sed fringilla cursus quam quis ornare. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; In nec diam sapien. Mauris ac enim dolor, a fringilla lorem. Nulla facilisi. Fusce bibendum quam vitae lorem placerat imperdiet. Phasellus molestie quam vehicula dolor auctor a dapibus lorem rhoncus. Fusce non arcu augue. Aliquam mollis placerat molestie. Duis quis diam vel erat porta bibendum vel id lacus. Quisque nec purus id magna imperdiet adipiscing non dictum sem. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Donec enim metus, tincidunt quis eleifend at, tristique in sem. In elit elit, lobortis cursus lobortis eu, scelerisque vel mi. Fusce at leo ac mi porta feugiat. Etiam nec facilisis sem. Pellentesque bibendum, felis sit amet vehicula convallis, libero dolor venenatis sapien, in pretium nulla odio quis dolor. Praesent mollis porttitor quam, in elementum odio condimentum at. Sed elit odio, aliquam nec molestie in, tempor eget felis. Suspendisse lobortis magna lorem. Suspendisse nisl risus, sollicitudin non imperdiet eu, vestibulum sit amet elit. Praesent vulputate molestie ante, sit amet sagittis enim egestas a. Vestibulum ultrices iaculis dolor eget pharetra. Nam purus velit, sodales eu facilisis at, imperdiet at mauris. Nulla et enim vitae nulla luctus gravida a a dui. Pellentesque sollicitudin, libero nec scelerisque bibendum, ipsum tortor vehicula ipsum, at ultricies massa nisi in nibh. Suspendisse vel pharetra odio. Fusce neque sapien, commodo in interdum nec, scelerisque vitae nunc. Nunc eu sapien ac justo placerat cursus in eu felis. Maecenas ultrices vestibulum iaculis. Proin vel risus erat, nec consectetur turpis. Etiam odio erat, placerat quis porta vel, euismod vel nibh. Nulla tristique molestie lacinia. Pellentesque molestie enim vel enim condimentum eu imperdiet nulla pellentesque. Ut arcu lectus, sodales eget varius ac, pharetra quis mauris. Quisque odio est, posuere vel auctor ut, elementum nec.

    Read the article

  • C++/msvc6 application crashes due to heap corruption, any hints?

    - by David Alfonso
Hello all, let me say first that I'm writing this question after months of trying to find the root of a crash happening in our application. I'll try to detail as much as possible what I've already found out about it. About the application: It runs on Windows XP Professional SP2. It's built with Microsoft Visual C++ 6.0 with Service Pack 6. It's MFC based. It uses several external dlls (e.g. Xerces, ZLib or ACE). It has high performance requirements. It does a lot of network and hard disk I/O, but it's also CPU intensive. It has an exception handling mechanism which generates a minidump when an unhandled exception occurs. Facts about the crash: It only happens on multiprocessor/multicore machines and under heavy loads of work. It happens at random (neither we nor our client have found a pattern yet). We cannot reproduce the crash in our testing lab. It only happens on some production systems (but always on multicore machines). It always ends up crashing at the same point, although the complete stack is not always the same. Let me add the stack of the crashing thread (obtained using WinDbg; sorry, we don't have symbols):
ChildEBP RetAddr Args to Child
WARNING: Stack unwind information not available. Following frames may be wrong.
030af6c8 7c9206eb 77bfc3c9 01a80000 00224bc3 MyApplication+0x2a85b9
030af960 7c91e9c0 7c92901b 00000ab4 00000000 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo])
030af98c 7c9205c8 00000001 00000000 00000000 ntdll!ZwWaitForSingleObject+0xc (FPO: [3,0,0])
030af9c0 7c920551 01a80898 7c92056d 313adfb0 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4])
030afa8c 4ba3ae96 000307da 00130005 00040012 ntdll!RtlFreeHeap+0x1e9 (FPO: [Non-Fpo])
030afacc 77bfc2e3 0214e384 3087c8d8 02151030 0x4ba3ae96
030afb00 7c91e306 7c80bfc1 00000948 00000001 msvcrt!free+0xc8 (FPO: [Non-Fpo])
030afb20 0042965b 030afcc0 0214d780 02151218 ntdll!ZwReleaseSemaphore+0xc (FPO: [3,0,0])
030afb7c 7c9206eb 02e6c471 02ea0000 00000008 MyApplication+0x2965b
030afe60 7c9205c8 02151248 030aff38 7c920551 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo])
030afe74 7c92056d 0210bfb8 02151250 02151250 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4])
030aff38 77bfc2de 01a80000 00000000 77bfc2e3 ntdll!RtlFreeHeap+0x647 (FPO: [Non-Fpo])
7c92056d c5ffffff ce7c94be ff7c94be 00ffffff msvcrt!free+0xc3 (FPO: [Non-Fpo])
7c920575 ff7c94be 00ffffff 12000000 907c94be 0xc5ffffff
7c920579 00ffffff 12000000 907c94be 90909090 0xff7c94be
*** WARNING: Unable to verify checksum for xerces-c_2_7.dll
*** ERROR: Symbol file could not be found. Defaulted to export symbols for xerces-c_2_7.dll -
7c92057d 12000000 907c94be 90909090 8b55ff8b MyApplication+0xbfffff
7c920581 907c94be 90909090 8b55ff8b 08458bec xerces_c_2_7
7c920585 90909090 8b55ff8b 08458bec 04408b66 0x907c94be
7c920589 8b55ff8b 08458bec 04408b66 0004c25d 0x90909090
7c92058d 08458bec 04408b66 0004c25d 90909090 0x8b55ff8b
The address MyApplication+0x2a85b9 corresponds to a call to erase() of a std::list. What I have tried so far: Reviewing all the code related to the point where the crash happens. Trying to enable pageheap in our testing lab, though nothing useful has been found so far. We have substituted the std::list with a C array, and then it crashes in another part of the code (although it is related code, it's not the code where the old list resided). Coincidentally, now it crashes in another erase, though this time of a std::multiset.
Let me copy the stack contained in the dump:
ntdll.dll!_RtlpCoalesceFreeBlocks@16() + 0x124e bytes
ntdll.dll!_RtlFreeHeap@12() + 0x91f bytes
msvcrt.dll!_free() + 0xc3 bytes
MyApplication.exe!006a4fda()
[Frames below may be incorrect and/or missing, no symbols loaded for MyApplication.exe]
MyApplication.exe!0069f305()
ntdll.dll!_NtFreeVirtualMemory@16() + 0xc bytes
ntdll.dll!_RtlpSecMemFreeVirtualMemory@16() + 0x1b bytes
ntdll.dll!_ZwWaitForSingleObject@12() + 0xc bytes
ntdll.dll!_RtlpFreeToHeapLookaside@8() + 0x26 bytes
ntdll.dll!_RtlFreeHeap@12() + 0x114 bytes
msvcrt.dll!_free() + 0xc3 bytes
c5ffffff()
Possible solutions (that I'm aware of) which cannot be applied: "Migrate the application to a newer compiler": we are working on this, but it's not a solution at the moment. "Enable pageheap (normal or full)": we can't enable pageheap on production machines, as this affects performance heavily. I think that's all I remember now; if I have forgotten something I'll add it ASAP. If you can give me a hint or propose a possible solution, don't hesitate to answer! Thank you in advance for your time and advice.
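For the pageheap experiments on a test box: full pageheap can be enabled per-image with gflags from the Debugging Tools for Windows, so only the one process pays the cost (a sketch for a lab machine, not the loaded production systems):

    rem Enable full page-heap verification for this image only, then reproduce the crash.
    gflags /p /enable MyApplication.exe /full
    rem Disable it afterwards.
    gflags /p /disable MyApplication.exe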

    Read the article

  • Thread Synchronization and Synchronization Primitives

When considering synchronization in an application, the decision truly depends on what the application and its worker threads are going to do. I would use synchronization if two or more threads could possibly manipulate the same instance of an object at the same time. An example of this in C# can be demonstrated through storing data in a static object. A static object is initialized once per application, and the data within the object can be accessed by all threads. I would use the synchronization primitives to prevent the data from being manipulated by multiple threads simultaneously; this reduces the chance of data corruption occurring within the object. On the other hand, if all the threads used non-static objects and were independent of the other tasks, there would be no need for synchronization. Synchronization primitives in C#: Basic Blocking, Locking, Signaling, and Non-Blocking Synchronization Constructs. The Basic Blocking methods include Sleep, Join, and Task.Wait. These methods force threads to wait until other threads have completed; they can also force a thread to wait a set amount of time before continuing to work. The Locking primitive prevents a thread from entering a critical section of code while another thread is in the same critical section. If another thread attempts to enter a locked code block, it will wait until the block is released. The Signaling primitive allows a thread to temporarily pause work until it receives a notification from another thread that it is OK to continue working; it removes the need for polling. The Non-Blocking Synchronization Constructs protect access to a common field by calling upon processor primitives.
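A minimal C# sketch (not from the original post) of the static-object scenario described above, using the Locking primitive so only one thread at a time can touch the shared state:

    // Shared static state guarded by a private lock object.
    public static class SharedCounter
    {
        private static readonly object Sync = new object();
        private static int count;

        public static void Increment()
        {
            lock (Sync)   // blocks other threads until the critical section is released
            {
                count++;
            }
        }

        public static int Value
        {
            get { lock (Sync) { return count; } }
        }
    }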

    Read the article

  • Node.js Adventure - Node.js on Windows

    - by Shaun
Two weeks ago I had a talk with Wang Tao, a C# MVP in China who is currently running his startup company and its product, worktile. He asked me to figure out a synchronization solution to help his product in the future, and he preferred that I implement the service in Node.js, since worktile is written in Node.js. Even though I have some experience in ASP.NET MVC, HTML, CSS and JavaScript, I don't consider myself an expert in JavaScript. In fact I'm very new to it, so it scared me a bit when he asked me to use Node.js. But after about one week of investigation I have to say Node.js is very easy to learn, use and deploy, even if you have very limited JavaScript skill. And I think I have come to love Node.js. Hence I decided to start a series named "Node.js Adventure", where I will tell the story of learning and using Node.js on Windows and Windows Azure. This is the first one.

(Brief) Introduction of Node.js I don't want to give a fully detailed introduction of Node.js; there are many resources on the internet, and the best one is its homepage. Node.js was created by Ryan Dahl and is sponsored by Joyent. It consists of about 80% C/C++ for the core and 20% JavaScript for the API. It uses CommonJS as the module system, which we will explain later. The official definition of Node.js is: Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. First of all, Node.js uses JavaScript as its development language and runs on top of the V8 engine, which is also used by Chrome. It brings JavaScript, a client-side language, into the backend service world. So many people say, though not quite accurately, that "Node.js is server-side JavaScript". Additionally, Node.js uses an event-driven, non-blocking IO model. This means that in Node.js there is no way to block the currently working thread; every operation executes asynchronously. This is a huge benefit, especially if our code needs IO operations such as reading disks, connecting to databases or consuming web services. Unlike IIS or Apache, Node.js does not use the multi-thread model. In Node.js there is only one working thread serving all user requests and resource responses, shown as the ST star in the figure below. And there is a POSIX async threads pool in Node.js which contains many async threads (AT stars) for IO operations. When a user makes an IO request, the ST serves it but does not perform the IO operation itself. Instead, the ST goes to the POSIX async threads pool, picks up an AT, passes the operation to it, and then goes back to serve other requests. The AT performs the IO operation asynchronously. Suppose another user request comes in before the AT completes the IO operation. The ST serves this new request, picks up another AT from the pool, and goes back again. When the first AT finishes its IO operation, it takes the result back and waits for the ST. The ST takes the response, returns the AT to the pool, and responds to the user. And when the second AT finishes its job, the ST responds to the second user in the same way. As you can see, in Node.js there is only one thread serving clients' requests and POSIX results. This thread loops between the users and POSIX, passing data back and forth. The async jobs are handled by POSIX.
This is the event-driven, non-blocking IO model, and its performance is much better than the multi-threaded blocking model. For example, Apache is built on the multi-threaded blocking model, while Nginx uses the event-driven non-blocking model; performance and memory-usage comparisons between them are shown in the video NodeJS Basics: An Introductory Training, presented by a Cloud Foundry Developer Advocate.

Node.js on Windows Executing a Node.js application on Windows is very simple. First we need to download the latest Node.js platform from its website. Once installed, it registers its folder in the system PATH variable so that we can execute Node.js from anywhere. To confirm the Node.js installation, just open a command window and type "node"; it will show the Node.js console. As you can see, this is a JavaScript interactive console where we can type simple JavaScript code and commands. To run a Node.js JavaScript application, just specify the source code file name as the argument of the "node" command. For example, let's create a Node.js source code file named "helloworld.js" and copy a sample from the Node.js website:

var http = require("http");

http.createServer(function (req, res) {
  res.writeHead(200, {"Content-Type": "text/plain"});
  res.end("Hello World\n");
}).listen(1337, "127.0.0.1");

console.log("Server running at http://127.0.0.1:1337/");

This code creates a web server, listening on port 1337, that returns "Hello World" when any request comes in. Run it in the command window, then open a browser and navigate to http://localhost:1337/. As you can see, when using Node.js we are not creating a web application; in fact we are creating a web server, and we need to deal with requests, responses and the related headers, status codes, etc. This is one of the benefits of using Node.js: it is lightweight and straightforward. But creating a website from scratch again and again is not acceptable. The good news is that Node.js uses CommonJS as its module system, so we can leverage modules to simplify our job. Furthermore, there are about ten thousand modules available on the internet, covering almost all areas of server-side application development.

NPM and Node.js Modules Node.js uses CommonJS as its module system. A module is a set of JavaScript files. In Node.js, if we have an entry file named "index.js", then all modules it needs will be located in the "node_modules" folder, and in "index.js" we can import modules by specifying the module name. For example, in the code we just created, we imported a module named "http", which is a built-in module installed along with Node.js, so that we can use the code in this "http" module. Besides the built-in modules there are many modules available at the NPM website. Thousands of developers are contributing and downloading modules there. Hence this is another benefit of using Node.js: there are many modules we can use, the number of modules grows very fast, and we can also publish our own modules to the community. When I wrote this post there were 14,608 modules on NPM in total and about 10 thousand downloads per day. Installing a module is very simple. Let's go back to our command window and type "npm install express". This command installs a module named "express", which is an MVC framework on top of Node.js.
Now let's create another JavaScript file named "helloweb.js" and copy the code below into it. I imported the "express" module. When the user browses the home page it responds with a text message. If the incoming URL matches "/Echo/:value", where "value" is whatever the user specified, it passes the value back with the current date and time in JSON format. Finally, the website listens on port 12345.

var express = require("express");
var app = express();

app.get("/", function(req, res) {
  res.send("Hello Node.js and Express.");
});

app.get("/Echo/:value", function(req, res) {
  var value = req.params.value;
  res.json({
    "Value" : value,
    "Time" : new Date()
  });
});

console.log("Web application opened.");
app.listen(12345);

For more information and the API of "express", please have a look here. Start the application from the command window with "node helloweb.js", then navigate to the home page to see the response in the browser. And if we go to, for example, http://localhost:12345/Echo/Hello Shaun, we can see the JSON result. The "express" module is very popular on NPM; it makes the job simple when we need to build an MVC website. There are many other useful modules on NPM: - underscore: a utility module covering many common functions such as for each, map, reduce, select, etc. - request: a very simple HTTP request client. - async: a library for coordinating async operations. - wind: a library which enables us to control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps.

Node.js and IIS I demonstrated how to run a Node.js application from the console. Since we are on Windows, another common requirement would be: "Can I host Node.js in IIS?" The answer is yes. Tomasz Janczuk created a project, IISNode, on his GitHub space, which we can find here, and Scott Hanselman published a blog post introducing it.

Summary In this post I gave a very brief introduction to Node.js, including its official definition and architecture, and how it implements the event-driven, non-blocking model. Then I described how to install and run a Node.js application from the Windows console. I also described the Node.js module system and the NPM command. At the end I referred to some links about IISNode, an IIS extension that allows Node.js applications to run on IIS. Node.js has become a very popular server-side application platform, especially this year. By leveraging its non-blocking IO model and async features it is very useful for building highly scalable, asynchronous services. I think Node.js will be widely used in cloud application development in the near future.

In the next post I will explain how to use SQL Server from Node.js.

Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Davicom Semiconductor, Inc. 21x4x DEC-Tulip not detected by Wireshark but IP operational

    - by deepsix86
Recently flipped to Ubuntu 11.10 on a Dell 4300 (Intel). Getting an IP address and no issues (ping/surf), but Wireshark is unable to detect the eth0 interface. I see references in forums to blacklisting tulip, but it looks like I am running dmfe. Not sure if the blacklist is required or where to go from here. Maybe a driver update? Got a little lost looking in that area. Some h/w details below (IP/MAC/HOSTNAME removed):
Linux xxxxxx 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 17:34:21 UTC 2012 i686 i686 i386 GNU/Linux
network-admin (HOSTS TAB) does not list eth0, only loopback and a bunch of IPv6 interfaces
ifconfig:
eth0 Link encap:Ethernet HWaddr xxxxxxxx
  inet addr:192.168.x.xx Bcast:192.168.2.255 Mask:255.255.255.0
  inet6 addr: xxxxxxxxxxx 64 Scope:Link
  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
  RX packets:36662 errors:0 dropped:1 overruns:0 frame:0
  TX packets:24975 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:42115779 (42.1 MB) TX bytes:3056435 (3.0 MB)
  Interrupt:18 Base address:0xe800
lspci:
02:09.0 Ethernet controller: Davicom Semiconductor, Inc. 21x4x DEC-Tulip compatible 10/100 Ethernet (rev 31)
  Subsystem: Device 4554:434e
  Flags: bus master, medium devsel, latency 64, IRQ 18
  I/O ports at e800 [size=256]
  Memory at fe1ffc00 (32-bit, non-prefetchable) [size=256]
  Expansion ROM at fe200000 [disabled] [size=256K]
  Capabilities: [50] Power Management version 2
  Kernel driver in use: dmfe
  Kernel modules: dmfe
hwinfo --netcard:
20: PCI 209.0: 0200 Ethernet controller [Created at pci.318]
  Unique ID: rBUF.0NgK5ZS9c0D
  Parent ID: 6NW+.siohrLUzzI4
  SysFS ID: /devices/pci0000:00/0000:00:1e.0/0000:02:09.0
  SysFS BusID: 0000:02:09.0
  Hardware Class: network
  Model: "Davicom 21x4x DEC-Tulip compatible 10/100 Ethernet"
  Vendor: pci 0x1282 "Davicom Semiconductor, Inc."
  Device: pci 0x9102 "21x4x DEC-Tulip compatible 10/100 Ethernet"
  SubVendor: pci 0x4554
  SubDevice: pci 0x434e
  Revision: 0x31
  Driver: "dmfe"
  Driver Modules: "dmfe"
  Device File: eth0
  I/O Ports: 0xe800-0xe8ff (rw)
  Memory Range: 0xfe1ffc00-0xfe1ffcff (rw,non-prefetchable)
  Memory Range: 0xfe200000-0xfe23ffff (ro,non-prefetchable,disabled)
  IRQ: 18 (61379 events)
  HW Address: 00:08:a1:01:35:70
  Link detected: yes
  Module Alias: "pci:v00001282d00009102sv00004554sd0000434Ebc02sc00i00"
  Driver Info #0:
    Driver Status: dmfe is active
    Driver Activation Cmd: "modprobe dmfe"
  Config Status: cfg=new, avail=yes, need=no, active=unknown
  Attached to: #11 (PCI bridge)
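For what it's worth, Wireshark not listing eth0 while the interface otherwise works is usually a capture-permissions issue rather than a driver one; listing and capturing need privileges a normal user session lacks. Two hedged options on Ubuntu:

    # Option 1 (Debian/Ubuntu packaging): allow group members to capture,
    # then add yourself to the wireshark group and log in again.
    sudo dpkg-reconfigure wireshark-common
    sudo usermod -a -G wireshark $USER
    # Option 2: grant the capture capabilities directly to dumpcap.
    sudo setcap cap_net_raw,cap_net_admin+eip /usr/bin/dumpcap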

    Read the article

  • Installing an ATI Mobility Radeon HD 5650 on Ubuntu 11.10

    - by Antonio
Hi everyone, I have an HP Pavillion dv6 3110 laptop with a dedicated 1 GB ATI Mobility Radeon HD 5650 video card, and I recently installed the 64-bit version of Ubuntu 11.10. I have followed many guides on the internet to install the drivers for my video card, but none of them worked. In the "Additional Drivers" window I managed to install the proprietary ATI/AMD fglrx graphics drivers, but after rebooting I cannot use the video card correctly. The ATI/AMD proprietary fglrx graphics drivers (post-release updates) will not install at all, reporting the following error: "L'installazione di questo driver non è riuscita. Consultare i file di registro per maggiori informazioni: /var/log/jockey.log" (the installation of this driver failed; see the log files for more information). I then thought of downloading the latest released drivers directly from the AMD site via the "amd-driver-installer-12-3-x86.x86_64.run" package: I launched it, followed the installation wizard, the installation completed, and I typed "sudo aticonfig --initial" for the initial configuration, but on rebooting the PC only text appears on a black screen, with a series of "OK" and a few "FAIL" lines. I also tried this procedure with earlier driver versions, but the result is always the same. I'm desperate. Will I ever manage to use my video card? For completeness, here is the output of "lspci -nn | grep VGA", listing the graphics processors present in my PC: 00:02.0 VGA compatible controller [0300]: Intel Corporation Core Processor Integrated Graphics Controller [8086:0046] (rev 02) 01:00.0 VGA compatible controller [0300]: ATI Technologies Inc Madison [AMD Radeon HD 5000M Series] [1002:68c1] Thanks in advance to anyone who can help me. Best regards, Antonio Giordano

    Read the article

  • Modular programming is the method of programming small tasks or sub-programs

Modular programming is the method of programming small tasks or sub-programs that can be arranged in multiple variations to produce the desired results. This methodology is great for preventing errors, because each task executes a specific process and can be debugged individually, or within a larger program when combined with other tasks or sub-programs. C# is a great example of how to implement modular programming because it allows functions, methods, classes and objects to be used to create smaller sub-programs. A program can be built from smaller pieces of code, which saves development time and reduces the chance of errors, because it is easier to test a small class or function for a simple solution than to test a full program which has layers and layers of small programs working together. Yes, it is possible to write the same program using modular and non-modular programming, but I would not recommend the latter. In my experience, non-modular programs tend to contain a lot of spaghetti code, which can be a pain to develop, not to mention debug, especially if you did not write the code. In addition, they seem to have a lot more hidden bugs, which wastes debugging and development time. The modular programming methodology should be used over non-modular whenever possible, due to its use of small components. These small components allow business logic to be reused and are easier to maintain. From the user's viewpoint, on today's computers they cannot really tell whether the code is modular or not. A small sketch follows.
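A minimal C# sketch (not from the original post) of the idea: a small, self-contained unit that can be tested on its own and reused by larger programs without knowing its internals:

    // A tiny module: one job, no hidden dependencies, trivially testable.
    public static class TemperatureConverter
    {
        public static double CelsiusToFahrenheit(double celsius)
        {
            return celsius * 9.0 / 5.0 + 32.0;
        }
    }

    // Any larger program can compose it:
    // double f = TemperatureConverter.CelsiusToFahrenheit(100.0);  // 212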

    Read the article

  • ASP.NET ViewState Tips and Tricks #1

    - by João Angelo
In User Controls or Custom Controls, DO NOT use ViewState to store non-public properties. Persisting non-public properties in ViewState results in loss of functionality if the Page hosting the controls has ViewState disabled, since the page can no longer reset the values of non-public properties on page load. Example:

public class ExampleControl : WebControl
{
    private const string PublicViewStateKey = "Example_Public";
    private const string NonPublicViewStateKey = "Example_NonPublic";

    // DO
    public int Public
    {
        get
        {
            object o = this.ViewState[PublicViewStateKey];
            if (o == null) return default(int);
            return (int)o;
        }
        set { this.ViewState[PublicViewStateKey] = value; }
    }

    // DO NOT
    private int NonPublic
    {
        get
        {
            object o = this.ViewState[NonPublicViewStateKey];
            if (o == null) return default(int);
            return (int)o;
        }
        set { this.ViewState[NonPublicViewStateKey] = value; }
    }
}

// Page with ViewState disabled
public partial class ExamplePage : Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        this.Example.Public = 10; // Restore Public value
        this.Example.NonPublic = 20; // Compile Error!
    }
}

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto is more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways: If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this: Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one. These interdependencies are not obvious to the make utility, and can lead to races. They are not obvious to the human reader, who may therefore not realize that they exist, and break them. Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems: It requires a long list of exceptions to silence our normal unused proto item error checking. In the past, we have accidentally shipped files that we did not intend to deliver to the end user. Mixing cruft with valuable items makes it hard to discern which is which. The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special-case makefile rules. I don't expect this to happen overnight; it will be a long term, case by case project, but the overall trend is clear. The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors: Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib). Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch. Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults. Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated 7021198 ld option to warn when link accesses a library via default path PSARC/2011/068 ld -z assert-deflib option into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:
-z assert-deflib=[libname]
Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library: ld ... -z assert-deflib=libc.so ... -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use.
Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exceptions to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with 7021896 Prevent OSnet from accidentally linking to build system, which integrated into snv_162 (March 2011).
Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult such errors are to prevent without an automatic system in place to catch them. Conclusions The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • Spotlight: How Scandinavia's Largest Nuclear Power Plant Increased Productivity and Reduced Costs with Oracle's AutoVue

    - by [email protected]
Ringhals nuclear power plant, which is part of the Vattenfall Group, is located about 60 km south-west of the beautiful coastal city of Gothenburg in Sweden. A deep concern for reducing environmental impact, coupled with an effort to increase plant safety and operational efficiency, has led to a recent surge in investments and initiatives around plant modification and plant optimization at Ringhals. The user groups involved in these projects faced a multitude of challenges. First, it was very difficult for users to easily access the complex, layered asset and engineering information that was critical to increasing productivity and completing projects on time. Moreover, the 20 or so different solutions being used to view various document formats not only created collaboration complexity but also escalated IT administration costs and headaches. Finally, there was a considerable non-engineering community of non-CAD specialists that needed easy access to plant data with minimal disruption to engineering. Oracle's AutoVue significantly simplified the ability to efficiently view and use digital asset information by providing a standardized visualization solution for the enterprise. The key benefits achieved by Ringhals include:
- Increased productivity of plant optimization and plant modification by 3%
- Saved around $500K annually
- Cut IT maintenance costs by 50% by using a single solution
- Reduced engineering disruption by allowing non-CAD users easy access to digital plant data
The complete case study can be found here

    Read the article

  • Language redirect affecting pagerank and search listing?

    - by Janoszen
Preface: We have a number of sites that use the same redirect mechanism across the board. We recently transitioned one site from non-localised to localised, and noticed that the Google+ integration no longer shows up in the search results AND the PageRank has gone from 2 to 0. How the redirect works:
1. If the UA sends a cookie (e.g. lang=en), redirect the user to /language (e.g. /en)
2. If the UA is a bot (.*bot.*), redirect to /en
3. If the Accept-Language header contains a usable, non-English language, redirect to /language (English is the default in many browsers in non-English regions)
4. If there is a valid GeoIP lookup and the detected region is linked to a supported language, redirect to /language
5. Otherwise, redirect to /en
(A sketch of this order appears below.) We do of course have the proper markup on all pages to indicate the alternate language: <link hreflang="de" href="/de" rel="alternate" /> As far as we can tell, we follow all publicly available guidelines from Google, so we are at a loss as to whether this is a bug in Google or we have done something wrong. Question: Does not having content on the root URL of a domain adversely affect search engine rankings, and if so, how does one implement proper language redirection?
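For illustration, a minimal sketch of the redirect order described above, in Python-style pseudocode; parse_accept_language and geoip_language are hypothetical helpers standing in for whatever plumbing the real site uses:

    def pick_language(request, supported=("en", "de")):
        lang = request.cookies.get("lang")
        if lang in supported:                                   # 1. cookie wins
            return lang
        if "bot" in request.headers.get("User-Agent", "").lower():
            return "en"                                         # 2. bots always get /en
        for lang in parse_accept_language(request.headers.get("Accept-Language", "")):
            if lang != "en" and lang in supported:              # 3. usable non-English header
                return lang
        lang = geoip_language(request.remote_addr)              # 4. GeoIP region mapping
        if lang in supported:
            return lang
        return "en"                                             # 5. default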

    Read the article

  • Get to Know a Candidate (6 of 25): Jill Stein – Green Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post about whom I am voting for. Information sourced from Wikipedia. Stein is a physician with degrees from Harvard College and Harvard Medical School. She serves on the boards of Greater Boston Physicians for Social Responsibility and MassVoters for Fair Elections, and has been active with the Massachusetts Coalition for Healthy Communities. Jill Stein advocates a "Green New Deal" in which renewable energy jobs would be created to address climate change and environmental issues, with the objective of employing "every American willing and able to work". Citing the research of Dr. Phillip Harvey, Professor of Law & Economics at Rutgers University, as evidence of the successful economic effects of the 1930s New Deal projects, Stein would fund the plan with a 30% reduction in the U.S. military budget, returning US troops home, and increased taxes on areas such as capital gains, offshore tax havens, and multimillion-dollar real estate. Stein plans to address what she sees as a growing convergence of environmental crises in water, soil, fisheries, and forests through the creation of sustainable infrastructure based on clean renewable energy generation and sustainable-community principles, such as increasing intra-city mass transit and inter-city railroads, creating 'complete streets' that safely encourage bike and pedestrian traffic, and building regional food systems based on sustainable organic agriculture. The Green Party of the United States was founded in 1991 as a voluntary association of state green parties. With its founding, the Green Party of the United States became the primary national Green organization in the United States, eclipsing the Greens/Green Party USA, which emphasized non-electoral movement building. The Green Party of the United States emphasizes environmentalism, non-hierarchical participatory democracy, social justice, respect for diversity, peace, and nonviolence. Their "Ten Key Values," which are described as non-authoritative guiding principles, are as follows:
    1. Grassroots democracy
    2. Social justice and equal opportunity
    3. Ecological wisdom
    4. Nonviolence
    5. Decentralization
    6. Community-based economics
    7. Feminism and gender equality
    8. Respect for diversity
    9. Personal and global responsibility
    10. Future focus and sustainability
    The Green Party does not accept donations from corporations; thus, the party's platforms and rhetoric critique corporate influence and control over government, media, and American society at large. Stein has access to 403 electoral votes and is a write-in candidate in GA, IN, and MS. Learn more about Jill Stein and the Green Party on Wikipedia.

    Read the article

  • Thinking skills to be a good programmer

    - by Paul
    I have been programming for the last 15 years with a non-CS degree. The main reason I got into programming was that I liked to learn new things and apply them to my work, and I was able to find and fix programming errors and their causes faster than others. But I have never considered myself a guru or an expert, maybe due to my non-CS major. When I have watched great programmers, I have observed that they are very good (much better than me, of course) at solving problems. One skill I found useful in mid-career is thinking about requirements and tasks in reverse order and in the abstract. That way, I can see what is really required of me without the detail, and can quickly find the parts of a solution that already exist. So I wonder what other thinking skills make a good programmer. I've followed the Q&As below and actually read some of the books recommended there, but I couldn't really pick up good methods directly applicable to my programming work. What non-programming books should a programmer read to help develop programming/thinking skills? Skills and habits to develop to be good at programming (I'm a newbie)

    Read the article

  • NoSQL

    - by NoReasoning
    Last night (Tuesday, June 28), at the KC .NET User Group meeting, George Westwater gave a terrific presentation on NoSQL. The best way to define it (better yet, see George explain it himself; he says he will record his presentation and make it available through his blog, link above) is: databases that do not use relational technology. And his point, and this is true (I have been around a while), is that non-relational databases have been used in business for over 50 years. He points out that Wall Street firms have been using non-relational technology ever since they started using computers. IBM still fully supports IMS, now in version 11 (12 is in beta), because these firms are still using the product and will continue to do so for a long time. Of course, like much of computer business technology, there are a lot of new NoSQL products available these days, largely as a reaction to the problems of scaling relational databases for internet use. As a result, it almost looks as though NoSQL is something new. And there are a lot, I mean a LOT, I mean a L-O-T, of new products out there for this technology. The best resource covering all of these products is http://nosql-database.org/, which has a huge listing of what is available. My interest in the subject is primarily due to my interest in Windows Azure and the fact that Windows Azure storage is all non-relational, even the table storage (a sketch of that entity model follows below). It is very fascinating and, most of all, far cheaper than using SQL Azure for storage in the "cloud."
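
    To make the "even the table storage" point concrete, here is a plain-Python sketch of the entity model that Azure-style table storage exposes. It is illustrative only (no real Azure SDK calls; the upsert helper and sample keys are assumptions): entities are schemaless property bags addressed by a compound (PartitionKey, RowKey) key, not rows joined through relational constraints.

        # Illustrative model of Azure-style table storage in plain Python;
        # this is not the real Azure SDK.
        table = {}  # (PartitionKey, RowKey) -> entity

        def upsert(partition_key, row_key, **properties):
            # No fixed schema: each entity carries whatever properties it likes.
            table[(partition_key, row_key)] = {
                "PartitionKey": partition_key,
                "RowKey": row_key,
                **properties,
            }

        upsert("customers", "c-001", name="Acme", tier="gold")
        upsert("customers", "c-002", name="Globex")      # a different shape is fine
        entity = table[("customers", "c-001")]           # cheap point lookup by key

    Point lookups by the compound key are cheap; anything resembling a join has to happen in application code, which is the usual trade-off when moving off relational storage.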

    Read the article

  • Blank screen during boot after clean Ubuntu 11.10 install (Intel N10 graphics)

    - by Coen
    After a clean install of Ubuntu 11.10 on my Asus Eee PC 1005P, Ubuntu seems to boot correctly, except for initialization of the LCD screen. What I observe:
    1. I choose Ubuntu 11.10 in the GRUB 2 menu
    2. A blank screen with a blinking cursor in the top left of the screen, for 15-20 seconds
    3. The Ubuntu logo with 5 red dots in the center of the screen, for 1 second
    4. The LCD screen is entirely blank
    5. The startup sound plays (Ubuntu is configured to auto-login)
    6. Still, the LCD screen is entirely blank

    When I press Fn-F8 (the switch between the LCD screen and external VGA), the LCD screen shows my desktop correctly and everything seems to work fine, except for the brightness buttons (Fn-F5 and Fn-F6): these seem to cycle through random brightness levels, something like 0% - 50% - 20% - 0% - 20% - 0%. Any ideas what's causing this or how to solve it?

    coen@elpicu:~$ lspci -v
    00:02.0 VGA compatible controller: Intel Corporation N10 Family Integrated Graphics Controller (prog-if 00 [VGA controller])
            Subsystem: ASUSTeK Computer Inc. Device 83ac
            Flags: bus master, fast devsel, latency 0, IRQ 44
            Memory at f7e00000 (32-bit, non-prefetchable) [size=512K]
            I/O ports at dc00 [size=8]
            Memory at d0000000 (32-bit, prefetchable) [size=256M]
            Memory at f7d00000 (32-bit, non-prefetchable) [size=1M]
            Expansion ROM at <unassigned> [disabled]
            Capabilities: <access denied>
            Kernel driver in use: i915
            Kernel modules: i915

    00:02.1 Display controller: Intel Corporation N10 Family Integrated Graphics Controller
            Subsystem: ASUSTeK Computer Inc. Device 83ac
            Flags: bus master, fast devsel, latency 0
            Memory at f7e80000 (32-bit, non-prefetchable) [size=512K]
            Capabilities: <access denied>

    Read the article
