Search Results

Search found 89681 results on 3588 pages for 'cross server'.


  • Possible problems in a team of programmers [on hold]

    - by John
    I am a "one man team" ASP.NET C#, SQL, HTML, JQuery programmer that wants to split workload with two other guys. Since I never actually thought of possible issue in a team of programmer, there are actually quite a few that came to my mind. delegating tasks (who works on what which is also very much related to security). I found Team Foundation Service could be helpful with this problem and started reading about it. Are there any alternatives? security (do now want for original code to be reused outside the project) How to prevent programmers from having access to all parts of code, and how to prevent them from using that code outside of project? Is trust or contract the only way?

  • Synchronise database between servers via PHP [closed]

    - by Emmanuel
    Hi guys, I need to synchronise two MySQL databases on different servers on a regular basis, through a client-initiated interface. I've been doing it over a remote MySQL connection, adding the IPs of the servers to the whitelist for remote MySQL connections. The problem, however, is that the client has a dynamic IP, so as soon as it changes they can no longer sync. So I'm trying to find an alternative way of synchronising the two databases via some sort of secure PHP script.

  • IIS 7 - floats returning with commas instead of periods

    - by cc0
    I'm having trouble reading rows with float values, because these rows return, for example, 12,34 instead of 12.34 as they should. I suspect this is because both my IIS and SQL Server are on a Norwegian Windows Server 2008. So I went to the regional settings and customized the default decimal symbol, then restarted my servers. The output in the database now shows the period decimal symbol, but when I request it through the IIS server it comes back comma-separated (the IIS server is on another computer, but that one also has the default decimal symbol set to period). The IIS server is IIS 7 and the SQL Server is 2008. Does anyone have any idea how to fix this? Any help would be greatly appreciated.
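    For illustration, .NET formats floating-point values using the current thread culture, so a worker process running under a Norwegian culture yields a comma no matter what the database stores; a minimal C# sketch of the difference (the nb-NO culture name is an assumption about this particular server):

        using System;
        using System.Globalization;

        class DecimalSymbolDemo
        {
            static void Main()
            {
                double value = 12.34;
                // Under a Norwegian culture the comma appears, as in the question:
                Console.WriteLine(value.ToString(new CultureInfo("nb-NO")));     // "12,34"
                // An explicit culture makes output independent of regional settings:
                Console.WriteLine(value.ToString(CultureInfo.InvariantCulture)); // "12.34"
            }
        }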

  • What's Bringing SharePoint 2007 Server to a Halt?

    - by juanlarios
    I've been having issues with my test environment and I'm hoping someone has run into this problem and can point me in the right direction. I noticed: SharePoint server memory usage is through the roof at times, and so is the CPU usage (most of the CPU usage is a SQL process), and I'm running out of disk space all the time. I looked in the logs located in the 12 hive and, sure enough, I have 1 GB log files that are hard to open because of their size. The following are the error messages that are flooding my SharePoint logs:

        04/05/2010 16:02:36.99 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Variations Propagate Page Job Definition', id '{F9A73EB4-90FE-4574-AD99-B4034056F915}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.

        04/05/2010 15:59:51.51 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Profile Synchronization', id '{A05E3439-8DCD-449A-9D9E-46D601CACAA2}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.

        04/05/2010 15:56:25.53 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Scheduled Unpublish', id '{6298F93F-388D-46B9-809E-CEDBB8659661}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.

        04/05/2010 15:54:14.73 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Config Refresh', id '{C42DA970-3DA3-4AA2-94E5-8499C5B80A3E}' for service '{7F6D2CBE-8071-4A30-B313-7C9989FC2D87}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.

    I'm googling around but haven't found much. I know one other person posted something about this back in 2008, but no answer was reached. I have already checked whether any of the databases have gone offline for whatever reason, but from SQL everything looks fine. I recently re-created an SSP and deleted an old one, so I thought maybe that was causing it; who knows, maybe that causes some of the problems, or all of them. I'm running the configuration wizard to see if anything changes. Please, if someone has had similar issues, let me know.

  • Could someone help me understand SQL TDE Database encryption?

    - by SLC
    I don't quite follow how it works. According to the MSDN article, there is a big hierarchy of keys protecting other keys and passwords, and at some point the database is encrypted. You query the encrypted database and it works seamlessly. If you're able to simply connect to the database as normal and not have to worry about any of the encryption from a developer's point of view, how exactly is it secure? Surely anyone can simply connect, do select * from x, and the data is revealed. Sorry my question is a bit scattered; I am just very confused by the article.
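    For context on the "seamless" part: TDE encrypts the data and log files at rest, and the engine decrypts pages for any authorized connection, so client code is unchanged. A hedged C# sketch (the connection string and table name are made up):

        using System;
        using System.Data.SqlClient;

        class TdeDemo
        {
            static void Main()
            {
                // Identical to code against an unencrypted database: with TDE
                // the pages are decrypted transparently for this authorized
                // login, while a stolen .mdf or backup file stays unreadable
                // without the server's certificate.
                using (var conn = new SqlConnection(
                    "Server=.;Database=EncryptedDb;Integrated Security=true"))
                using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.x", conn))
                {
                    conn.Open();
                    Console.WriteLine(cmd.ExecuteScalar());
                }
            }
        }

    In other words, TDE protects against loss of the physical files, not against someone who can already log in, which is exactly the gap the question is circling.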

  • atftp pcre pattern

    - by CE-SA
    I have a question about the package named 'atftp'. I've finally got the atftp daemon working. Previously I was using tftp-hpa with a custom rule that replaced filenames containing capitals with non-capital filenames, and replaced backslashes with forward slashes, so that WinPE would boot fine. But in atftp I can't find rules or replacements like that. I've been searching for a long time, but I cannot find or write the right PCRE pattern. Could you help me with this?

  • Maximum Length Of IP Address: 15 (IPv4) & 39 (IPv6)

    - by Gopinath
    Problem: You are designing a database table for a web application that needs to store the IP address of users who visit the site. The IP address is to be stored as character data in the table, so to define the size of the character column you need to know the maximum length of an IP address. What is it? Solution: An IPv4 address has the following maximal format: 255.255.255.255. To store an IPv4 address we therefore require 15 characters. An IPv6 address is written as eight groups of 4 hex digits separated by colons, like this: 2001:0db8:85a3:0000:0000:8a2e:0370:7334. To store an IPv6 address you require a 39-character column. (If you also need to store IPv4-mapped IPv6 notation such as ::ffff:255.255.255.255, allow 45 characters.) Conclusion: As IPv4 and IPv6 are the commonly used protocols, you are better off defining a column 39 characters long so that addresses in both formats are saved to the table without any issues. This article, titled Maximum Length Of IP Address: 15 (IPv4) & 39 (IPv6), was originally published at Tech Dreams.
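    A quick C# sanity check of those lengths (illustrative only; the strings are the samples from the article):

        using System;
        using System.Net;

        class IpLengthCheck
        {
            static void Main()
            {
                const string v4 = "255.255.255.255";
                const string v6 = "2001:0db8:85a3:0000:0000:8a2e:0370:7334";

                Console.WriteLine(v4.Length); // 15
                Console.WriteLine(v6.Length); // 39

                // Both parse as valid addresses, so a single 39-character
                // column can hold either format as text.
                Console.WriteLine(IPAddress.Parse(v4));
                Console.WriteLine(IPAddress.Parse(v6));
            }
        }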

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF

    After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data.

    CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to define one that has moved between parks as one, with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name.

    In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that: I mostly wasn't interested in ORMs. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of NHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic CRUD operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework.

    Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw in some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there.

    The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff.

    Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time-consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver.

    Not everything is easy, though. When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old-school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities, because EF doesn't know what you're changing. Not really a big deal.

    There are a number of takeaways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORMs. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch.

    Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
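    For readers who haven't seen EF code-first, a minimal, hypothetical sketch of the kind of mapping described above (the entity and table names are invented, not CoasterBuzz's actual schema):

        using System.Collections.Generic;
        using System.Data.Entity; // Entity Framework 4.1+ ("code first")

        public class Park
        {
            public int ParkId { get; set; }
            public string Name { get; set; }
            public virtual ICollection<Coaster> Coasters { get; set; }
        }

        public class Coaster
        {
            public int CoasterId { get; set; }
            public string Name { get; set; }
            public virtual Park Park { get; set; } // navigation property
            public virtual ICollection<CoasterInstance> Instances { get; set; }
        }

        public class CoasterInstance
        {
            public int CoasterInstanceId { get; set; }
            public string Name { get; set; } // a relocated coaster's name at one park
        }

        public class SiteContext : DbContext
        {
            public DbSet<Park> Parks { get; set; }
            public DbSet<Coaster> Coasters { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Deviating from EF's conventions (legacy table names and the
                // like) is handled with explicit mappings such as this:
                modelBuilder.Entity<Coaster>().ToTable("Rides");
            }
        }

    Reading then really is close to the one-liner the post mentions: something like context.Coasters.Include("Park") hands back a hydrated object graph ready for the view.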

  • What criteria would I use to compare SQL StreamInsight vs TPL Dataflow? [closed]

    - by makerofthings7
    There is an add-on to the Task Parallel Library (TPL) called TPL Dataflow that enables a variety of data-processing scenarios. It seems there are some parallels with the SQL StreamInsight product; however, since StreamInsight has some interesting licensing around it, and its performance depends on which license I get, I found myself asking whether I should use TPL Dataflow instead, avoid any licensing issues, and possibly get better performance. Can anyone tell me if performance is a valid criterion for comparing SQL StreamInsight vs TPL Dataflow? What other criteria should I be looking at when comparing the two?
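    For readers who haven't seen it, a TPL Dataflow pipeline looks roughly like this; a minimal sketch (the parse-and-print workload is made up):

        using System;
        using System.Threading.Tasks.Dataflow; // TPL Dataflow add-on (NuGet)

        class PipelineDemo
        {
            static void Main()
            {
                // A two-stage pipeline: transform strings to ints, then act
                // on them with bounded parallelism.
                var parse = new TransformBlock<string, int>(s => int.Parse(s));
                var print = new ActionBlock<int>(
                    n => Console.WriteLine(n),
                    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });

                parse.LinkTo(print, new DataflowLinkOptions { PropagateCompletion = true });

                foreach (var s in new[] { "1", "2", "3" })
                    parse.Post(s);

                parse.Complete();
                print.Completion.Wait();
            }
        }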

  • Why does linux-image-virtual depend on a generic kernel now?

    - by ændrük
    The linux-image-virtual metapackage has historically provided a kernel that is specifically designed for use in virtual machines:

        Ubuntu 8.04:  linux-image-2.6.24-32-virtual
        Ubuntu 10.04: linux-image-2.6.32-44-virtual
        Ubuntu 11.10: linux-image-3.0.0-26-virtual
        Ubuntu 12.04: linux-image-3.2.0-32-virtual

    Apparently, this has now changed:

        Ubuntu 12.10: linux-image-3.5.0-17-generic

    What's the explanation? Is this still the correct kernel to use in a virtual machine?

  • What is a safe ulimit ceiling?

    - by Kaustubh P
    This is the output of ulimit -a:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    This is a 64-bit install, and I would like to increase the max open files from 1024 to a more heady limit such as 5000. Will that be any problem? Will it cause instability? Thanks.

  • Query to select the topic with the highest number of comments + support + oppose + views

    - by chetan
    Table schema and sample rows:

        title    description  desid  replyto  support  oppose  views
        browser  used         a1     none     1        1       12
        -        bad topic    b2     1        2        3       14
        sql      database     a3     none     4        5       34
        -        crome        b4     1        3        4       12

    A topic's desid starts with 'a' and a comment's desid starts with 'b'. For a comment, replyto is the desid of its topic. It's easy to select the topics with the highest support + oppose + views with the query:

        select * from [DB_user1212].[dbo].[discussions] where desid like 'a%' order by (sup + opp + visited) desc

    For the highest (comments + support + oppose + views) I tried:

        select * from [DB_user1212].[dbo].[discussions] where desid like 'a%' order by ((select count(*) from [DB_user1212].[dbo].[discussions] where replyto = desid) + sup + opp + visited) desc

    but it didn't work, because as written the desid in the subquery's where clause resolves to the inner table rather than to the outer query's row.
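    For what it's worth, a correlated subquery can reference the outer row once the two table references are disambiguated with aliases; a hedged sketch against the same table (column names taken from the question's own queries):

        select *
        from [DB_user1212].[dbo].[discussions] t
        where t.desid like 'a%'
        order by ((select count(*)
                   from [DB_user1212].[dbo].[discussions] c
                   where c.replyto = t.desid)
                  + t.sup + t.opp + t.visited) desc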

  • How to determine the source of a request in a distributed service system?

    - by Kabumbus
    Map/Reduce is a great concept for sorting large quantities of data at once. But what do you do if you have small parts of data and you need to reduce them all the time? A simple example: choosing a service for a request. Imagine we have 10 services. Each provides the services host with sets of request headers and POST/GET arguments, and each service declares it has 30 unique keys, 10 per set, e.g.:

        service A: name, id, ...

    Now imagine we have a distributed services host: 200 machines with 10 services on each, and each service has 30 unique keys in its sets. But now, to find which service to map an incoming request to, we make our services post the unique value sets that those keys take. We can have 10 000 or more such value sets on each machine for each service:

        service A, machine 1:   name = Sam, id = 13245, ...
        service A, machine 1:   name = Ben, id = 33232, ...
        ...
        service A, machine 100: name = Ron, id = 777888, ...

    So we get 200 * 10 * 30 * 30 * 10 000 == 18 000 000 000 values, and we get 500 requests per second on our gateway, each containing 45 items, 15 of which are just noise. Our task is to find the service for each request (or at least the machine it is running on). Across the whole cluster, the same service has the same rules on every machine, so we can first select which service a request belongs to via a rules filter (10 * 30), which still leaves 200 * 30 * 10 000 == 60 000 000. So... 60 million is definitely a problem. My idea is to map the 30 * 10 000 values onto something like an artificial neural network, a Perceptron, that outputs 1 if the 30 words (or hashes of the words) from the request match, and 0 if fewer match. I would then send each such Perceptron, for each service on each machine, to the gateway, so the gateway would have a Perceptron <-> machine map for each service. Can anyone tell me if my Perceptron idea is at least "sane"? Or do people normally do this some other way? Are there better ANNs for such purposes?

  • How do you avoid working on the wrong branch?

    - by henginy
    Being careful is usually enough to prevent problems, but sometimes I need to double-check the branch I'm working on (e.g. "hmm... I'm in the dev branch, right?") by checking the source control path of a random file. Looking for an easier way, I thought of naming the solution files accordingly (e.g. MySolution_Dev.sln), but with different file names in each branch I can't merge the solution files. It's not that big of a deal, but are there any methods or "small tricks" you use to quickly ensure you're in the correct branch? I'm using Visual Studio 2010 with TFS 2008.

  • SQL vs. Oracle Live Debate (AKA Smackdown!)

    - by Peter W. DeBetta
    A few years ago I was speaking at a conference in Raleigh, NC, where Ted Neward and I found a fun way to promote a Java vs. .NET debate that was planned for one evening. We stood in the middle of a crowd during one of the breaks and started “arguing” about Java vs. .NET with one another. Our voices quickly rose, and we ended it by slapping each other across the face with a glove to issue a challenge. It was a great way to segue into announcing the actual debate planned for later that evening....(read more)

  • Denali CTP3 - Semantic Search 2 (Lots of documents)

    - by sqlartist
    Hi again, I thought I would improve on the previous post by actually putting a decent amount of content into the FileTable. This time I used the open-source DMOZ Health document repository, which contains 5,880 files inside 220 folders. The files are all HTML and are pretty small in size; the entire document collection is about 120 MB unzipped and 30 MB zipped. If anyone is interested in testing this collection, drop me a note and I will upload the dmoz_health repository archive to SkyDrive. This time...(read more)

  • Best scripting language for project [on hold]

    - by Dave
    This is a subjective question, but I don't know where else to ask it. I'd appreciate it if someone could direct me to an appropriate scripting language for my project; I'm a little new at this, so any help is welcome. The project is a website that will display a list of photo subject groups (such as "nature", "people", "sports", etc.) on the home page. The photos will all be in subdirectories of the main photo directory (photos), and each subject group will correspond to a subdirectory of photos. For example, in the directory photos there might be 3 subdirectories, "nature", "people" and "sports", and in each of those subdirectories there will be the actual photos. The idea is that when the website owner wants to update/add/delete a subject group, all he has to do is add, delete or update a subdirectory of the photos directory. This means, I think, that I need a scripting language that can read the directories and files on the website and then send a web page with the information in it; a sketch of that core idea follows below. What is the simplest and easiest scripting language to do this in? Any ideas? Thanks.
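    Whichever language is chosen, the core of the task is just two nested directory enumerations; a minimal C# sketch for illustration (the photos path and layout are the ones assumed above):

        using System;
        using System.IO;

        class PhotoGroups
        {
            static void Main()
            {
                // Each subdirectory of "photos" is a subject group; each file
                // inside it is a photo in that group.
                foreach (var dir in Directory.GetDirectories("photos"))
                {
                    Console.WriteLine(Path.GetFileName(dir)); // e.g. "nature"
                    foreach (var file in Directory.GetFiles(dir))
                        Console.WriteLine("  " + Path.GetFileName(file));
                }
            }
        }

    The same two loops exist in PHP (scandir), Python (os.listdir) or Node.js (fs.readdir), so the choice can safely be driven by the hosting environment.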

  • Bleeding Edge 2012 – session material

    - by Hugo Kornelis
    As promised, here are the slide deck and demo code I used for my presentation at the Bleeding Edge 2012 conference in Laško, Slovenia. Okay, I promised to have them up by Tuesday or Wednesday at worst, and it is now Saturday – my apologies for the delay. Thanks again to all the attendees of my session. I hope you enjoyed it, and if you have any question then please don’t hesitate to get in touch with me. I had a great time in Slovenia, both during the event and in the after hours. Even if everything...(read more)

  • How to control fan speed and temperatures on Asus A8Js laptop?

    - by Azeworai
    Hi, I have tried installing asusfan and lm-sensors, but I'm unable to control my fans to cool my laptop down sufficiently. Currently it overheats at about 100 degrees Celsius, and my sensors output somehow does not have any fan information in it:

        jackson@OLYMPIA:~$ sensors
        acpitz-virtual-0
        Adapter: Virtual device
        temp1:       +69.0°C  (crit = +110.0°C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:      +66.0°C  (high = +100.0°C, crit = +100.0°C)

        coretemp-isa-0001
        Adapter: ISA adapter
        Core 1:      +66.0°C  (high = +100.0°C, crit = +100.0°C)

    I have checked my BIOS and there aren't any fan settings there. I can consistently overheat the machine just by converting a video via HandBrake. I have ubuntu-desktop installed for a GUI. Is there a way for me to make my fans start spinning before the machine reaches a critical temperature and kills itself?

  • Can't finish upgrade from 11.10 to 12.04 on a VPS based on Parallels Virtuozzo Containers, due to libc6

    - by Carmageddon
    I was stuck with this problem near the end of an upgrade:

        WARNING: this version of the GNU libc requires kernel version
        2.6.24 or later. Please upgrade your kernel before installing glibc.
        The installation of a 2.6 kernel could ask you to install a new libc first,
        this is NOT a bug, and should NOT be reported. In that case, please add
        lenny sources to your /etc/apt/sources.list and run:
          apt-get install -t lenny linux-image-2.6

    Their suggested steps don't work on a VPS, and after googling I came to this: Why did my upgrade to 12.04 fail with "glibc not found" or "libc6" or "requires kernel 2.6.24" error? There is a comment by izx which explains my problem and proposes a workaround (it might take a while to convince the guys to upgrade the kernel...). However, when I follow his instructions, I get an error:

        # apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libc-dev-bin libc6 libc6-dev libnih1
        Suggested packages:
          glibc-doc
        The following packages will be upgraded:
          libc-dev-bin libc6 libc6-dev libnih1
        4 upgraded, 0 newly installed, 0 to remove and 394 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/7737 kB of archives.
        After this operation, 233 kB disk space will be freed.
        Do you want to continue [Y/n]? y
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale)
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale)
        Preconfiguring packages ...
        (Reading database ... 35175 files and directories currently installed.)
        Preparing to replace libc6-dev 2.13-20ubuntu5.2 (using .../libc6-dev_2.15-0ubuntu10.3_amd64.deb) ...
        Unpacking replacement libc6-dev ...
        Preparing to replace libc-dev-bin 2.13-20ubuntu5.2 (using .../libc-dev-bin_2.15-0ubuntu10.3_amd64.deb) ...
        Unpacking replacement libc-dev-bin ...
        Preparing to replace libc6 2.13-20ubuntu5.2 (using .../libc6_2.15-0ubuntu10.3_amd64.deb) ...
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale)
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale)
        Checking for services that may need to be restarted...
        Checking init scripts...
        runlevel:/var/run/utmp: No such file or directory
        Checking for services that may need to be restarted...
        Checking init scripts...
        runlevel:/var/run/utmp: No such file or directory
        WARNING: init script for samba not found.
        Stopping some services possibly affected by the upgrade (will be restarted later):
          cron: stopping...done.
        WARNING: this version of the GNU libc requires kernel version
        2.6.24 or later. Please upgrade your kernel before installing glibc.
        The installation of a 2.6 kernel _could_ ask you to install a new libc first,
        this is NOT a bug, and should *NOT* be reported. In that case, please add
        lenny sources to your /etc/apt/sources.list and run:
          apt-get install -t lenny linux-image-2.6
        Then reboot into this new kernel, and proceed with your upgrade
        dpkg: error processing /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_amd64.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        Processing triggers for man-db ...
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale)
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale)
        Errors were encountered while processing:
         /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I also attempted to manually grab the .deb package and install it using dpkg -i, but I'm getting:

        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale)

    even though the file is libc-bin_2.15-0ubuntu10+openvz0_amd64.deb.

  • Is hierarchical product backlog a good idea in TFS 2012-2013?

    - by Matías Fidemraizer
    I'd like to confirm that I'm not on the wrong track. My team project is using Visual Studio Scrum 2.x. Since each area/product has many kinds of requirements (security, user interface, HTTP/REST services...), I tried to manage this by creating "parent backlogs" which are "open forever" and contain generic requirements. Those parent backlogs have other "open forever" backlogs, and/or sprint backlogs. For example:

        HTTP/REST Services (forever)
            Profiles API (forever)
                POST profile (forever)
                    We need a basic HTTP/REST profiles' API to register new user profiles (sprint backlog)

    Is this the right way of organizing the product backlog? Note: I know there are different points of view, and what is right for some would be wrong for others. I'm looking for validation of whether this is a good practice on TFS with Visual Studio Scrum.

  • ifconfig not showing all IPs bound to the machine

    - by pankaj sharma
    I have configured multiple IP addresses on an Ubuntu box, but when I run ifconfig it shows just one of them. I am able, however, to ping all the other addresses assigned to this machine. /etc/network/interfaces contents:

        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 192.168.202.11
            netmask 255.255.255.0
            network 192.168.202.0
            broadcast 192.168.202.255
            gateway 192.168.202.1
            # dns-* options are implemented by the resolvconf package, if installed
            dns-search idil.dz1.da

        auto eth0:1
        iface eth0:1 inet static
            address 192.168.202.12
            netmask 255.255.255.0

        auto eth0:2
        iface eth0:2 inet static
            address 192.168.202.13
            netmask 255.255.255.0

        auto eth0:3
        iface eth0:3 inet static
            address 192.168.202.14
            netmask 255.255.255.0

        auto eth0:4
        iface eth0:4 inet static
            address 192.168.202.15
            netmask 255.255.255.0

        auto eth0:5
        iface eth0:5 inet static
            address 192.168.202.16
            netmask 255.255.255.0

    But the output of ifconfig shows only 192.168.202.11.

  • Where does node.js install to?

    - by Ash Scott
    I'm trying to install a script, which is a clone of a game and uses node.js. The documentation says I should copy node.exe (Windows) and put it where the clone is. Now, I can't find the Ubuntu equivalent of node.exe; I can't even find where it's installed! I don't really want this hosted on a Windows machine due to licensing. Here's a snippet from the doc:

        Download: Node.js (Install button)
        Go to where you installed Node.js (for Windows 8 it's C:/Programms/nodejs)
        Copy node.exe and paste it into the clone folder

    Now I need to do this on Ubuntu; however, I can't find where node is installed. Any ideas?

  • Analyzing the errorlog

    - by TiborKaraszi
    How often do you do this? Look over each message (type) in the errorlog file and determine whether it is something you want to act on. Sure, some (but not all) of you have a monitoring solution in place, but are you 100% confident that it really will notify you of all the messages you might find interesting? That there isn't even one little message hiding in there that you would find valuable to know about? Or how about messages that you typically don't care about, but knowing that you have a high...(read more)

  • How do I restrict access to a directory for a specific user through samba?

    - by dummzeuch
    I have got a subdirectory of a shared directory that I use with Samba, and I have set it to be accessible by only one user:

        $ cd /mnt/SomeSambaShare
        $ ls -lad SomeDir
        drwx--S--- 23 SomeUser SomeGroup 4096 2012-07-26 07:44 SomeDir

    I cannot access this directory as a Linux user other than SomeUser, but I can still access it as a different Samba user than SomeUser. Why is that? And how do I prevent this?
