Search Results

Search found 28957 results on 1159 pages for 'single instance'.


  • Why does cd print when run in command substitution?

    - by reasgt
    If I use the 'cd' Bash built-in in a command substitution, it prints extra stuff to stdout, but only when piped to, e.g., less:

      $ echo `cd .`          # the output is a single newline, appended by echo
      $ echo `cd .` | less   # less displays: ESC]2;my.hostname.com - tmp/testenv^G (END)

    What's going on there? This behavior isn't documented in the bash man page for cd. Obviously, running just 'cd' in a command substitution is silly, but something like NEWDIR=`cd mypath; pwd` could be useful. I solved this by using NEWDIR=`cd mypath > /dev/null 2>&1; pwd` instead, but I still want to know what's going on. Bash version: GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu) Copyright (C) 2005 Free Software Foundation, Inc. Distro: Scientific Linux SL release 5.5 (Boron)
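    One likely explanation, sketched under an assumption: some distros ship a shell profile that wraps cd in a function which writes an xterm title escape to stdout, and command substitution captures that output. The wrapper below is hypothetical, but it reproduces the ESC]2;...^G sequence from the question:

      # hypothetical wrapper, of the kind found in /etc/bashrc or /etc/profile.d
      cd() { builtin cd "$@" && printf '\033]2;%s - %s\007' "$(hostname)" "$PWD"; }
      NEWDIR=`cd mypath; pwd`                    # captures the escape along with the path
      NEWDIR=`cd mypath >/dev/null 2>&1; pwd`    # the asker's fix: discard cd's stdout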

    Read the article

  • VMWare Guest NIC Teaming

    - by Justin Popa
    We're looking to add bandwidth to a VM running on an ESXi 5.1 cluster. How can I team NICs within the VM? I suspect I need to add a second e1000 adapter and then install some Intel software to team them. Any idea which version of the Intel driver? Is there better software to use? EDIT: Sorry, I neglected some information. The guest OS is Win2k8R2. The physical NICs on the host are 1Gbps. The reason this has come up is that we're seeing the VM hit near the cap of a single 1Gbps link (usually 100-110MBps, bursting into the 130s, though I think that may just be a UI math lie), and we're interested in seeing whether adding an additional NIC in a teamed setting will increase the overall throughput.

    Read the article

  • git/gitolite: big git repo with several mini projects

    - by Jay
    I'm pretty new to the whole version control thing, and even more so with git. I recently installed git on my computer(s) and set it up on a NAS server. However, I have several client folders with several project folders per client folder, and each client folder is one giant repo encompassing every project inside it. What I'm wondering is: is there a way to break this apart? So, for instance: the NAS is my 'origin' and has gitolite installed. On computer1 I have every project folder ever created for a client (a clean checkout). On computer2 I don't have a checkout of that client's repo (all the projects in it are completed and I don't need a working copy), but I do have a brand-new project folder for that client, "newproject". Is there a way to commit and push to the NAS repo from computer2? Or perhaps is there a better way of organizing all this?
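    A hedged sketch of one common reorganization: give each project its own repo on the gitolite server, so computer2 only clones what it needs. The repo name and remote URL below are hypothetical, and auto-creation on first push assumes gitolite's wildcard-repo feature is enabled:

      # on computer2, inside the new project folder
      git init
      git add .
      git commit -m "Initial commit for newproject"
      git remote add origin git@nas:clientA/newproject.git
      git push -u origin master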

    Read the article

  • Handling range in CNAME

    - by Imran
    We have different sets of CNAMEs pointing to different subdomains. These subdomains (a.domain.com, b.domain.com) point to different IPs on different machines.

      # Server A
      a1.domain.com points to a.domain.com
      a2.domain.com points to a.domain.com
      ..
      aN.domain.com points to a.domain.com
      # Server B
      b1.domain.com points to b.domain.com
      b2.domain.com points to b.domain.com
      ..
      bN.domain.com points to b.domain.com

    Currently, we have to add individual CNAME entries (e.g. a1 .. aN) against a single subdomain (a.domain.com), and we repeat the process for every new server, which is really just another subdomain (e.g. c.domain.com). Is there a way to specify a range of CNAMEs (e.g. [a1..a25].domain.com pointing to a.domain.com) instead of adding separate CNAME entries? Can this be handled at the DNS level or at the webserver (Apache or Nginx) level?
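    On the DNS side, BIND's $GENERATE directive expands a numeric range into records, which matches this pattern exactly. A hedged sketch (note that $GENERATE is a BIND extension, not standard zone-file syntax, so other DNS servers need a different mechanism):

      ; in the domain.com zone file (BIND)
      $GENERATE 1-25 a$ CNAME a
      $GENERATE 1-25 b$ CNAME b
      ; a1.domain.com through a25.domain.com now alias a.domain.com

    At the webserver level, Apache's ServerAlias accepts wildcards (e.g. ServerAlias a*.domain.com), but that only selects the virtual host; DNS records for the names still have to exist.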

    Read the article

  • Mono on Linux: Apache or Nginx

    - by Furism
    Hi, I'm developing an ASP.NET application that will run under Linux/Mono for various reasons (mostly to stay away from IIS, quite frankly). Of course the first web server I had in mind was Apache. But Apache, for all its advantages, adds a lot of overhead. Also, the application I'm building needs to be highly scalable, and performance is one of the main concerns. Apache obviously has a very good reputation and its record speaks for itself, but I don't need things like reverse proxying or load balancing because dedicated network devices will be used for that, so those Apache modules would never be used. So basically my question is: since Nginx seems to fit my needs exactly, are there any caveats I should be aware of? For instance, is Nginx renowned for being particularly safe? When security flaws are detected, how fast are they patched? Any insight on the pros and cons of using either of these servers in conjunction with Mono is welcome.
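    For context, Nginx has no Mono module of its own; the usual wiring runs Mono's FastCGI server behind it. A hedged sketch, with hypothetical paths and port (the server binary is fastcgi-mono-server2 or fastcgi-mono-server4, depending on the target runtime):

      # start Mono's FastCGI host
      fastcgi-mono-server4 /applications=/:/srv/www/app /socket=tcp:127.0.0.1:9000 &

      # matching nginx server block
      server {
          listen 80;
          server_name app.example.com;
          location / {
              root /srv/www/app;
              index index.aspx;
              fastcgi_pass 127.0.0.1:9000;
              include fastcgi_params;
          }
      }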

    Read the article

  • Chromebook, Crouton & “External Drive”: is it possible to rename the folder?

    - by Cyril N.
    I have a Chromebook on which I installed Crouton. I have plugged in an SD card, which Chrome OS mounts as "External Drive". In my Ubuntu instance it shows up at /media/removable/External Drive/, but the space in "External Drive" breaks some of the applications I have installed on that drive, so I need a path without spaces. My question is simple: is it possible to move/rename the mount point "External Drive" to something else, and to do it automatically at every mount/boot? Thank you for your help!
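    A hedged workaround sketch: leave the Chrome OS mount point alone and bind-mount it to a space-free path inside the chroot (the target path is hypothetical). To satisfy the "every mount/boot" requirement it would need to be re-run at chroot startup, e.g. from /etc/rc.local:

      sudo mkdir -p /media/sdcard
      sudo mount --bind "/media/removable/External Drive" /media/sdcard
      # applications can now be given /media/sdcard/..., with no space in the path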

    Read the article

  • What sort of attack URL is this?

    - by Asker
    I set up a website with my own custom PHP code. It appears that people from places like Ukraine are trying to hack it. They're trying a bunch of odd accesses, seemingly to detect what PHP files I've got. They've discovered that I have PHP files called mail.php and sendmail.php, for instance. They've tried a bunch of GET requests like:

      http://mydomain.com/index.php?do=/user/register/
      http://mydomain.com/index.php?app=core&module=global&section=login
      http://mydomain.com/index.php?act=Login&CODE=00

    I suppose these all pertain to something like LiveJournal? Here's what's odd, and the subject of my question. They're trying this URL: http://mydomain.com?3e3ea140 What kind of website is vulnerable to a 32-bit hex number?

    Read the article

  • MaxClients in Apache. How do I know the size of my processes?

    - by Larry
    From http://httpd.apache.org/docs/2.2/misc/perf-tuning.html: "The single biggest hardware issue affecting webserver performance is RAM. A webserver should never ever have to swap, as swapping increases the latency of each request beyond a point that users consider 'fast enough'. This causes users to hit stop and reload, further increasing the load. You can, and should, control the MaxClients setting so that your server does not spawn so many children it starts swapping. The procedure for doing this is simple: determine the size of your average Apache process, by looking at your process list via a tool such as top, and divide this into your total available memory, leaving some room for other processes." The main issue is that I can't work out how to determine the size: top shows my httpd processes at no more than 3888 (KB, I assume). If I divide that into my 4GB of RAM I get 972, so should I use something like 900 for MaxClients?
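    A hedged sketch of the arithmetic, using the numbers from the question; the 512MB reserved for the OS and other daemons is an assumption, and the awk column (8 = RSS in ps -l output with -y) should be checked against your ps:

      # average resident size of the httpd children, in KB
      ps -ylC httpd --sort=rss | awk 'NR>1 {sum+=$8; n++} END {print sum/n, "KB avg"}'
      # (4096 MB total - 512 MB reserved) / ~3.8 MB per child ≈ 940
      # so rounding down to MaxClients 900 leaves a sensible margin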

    Read the article

  • Web server (IIS) and database mirroring (PostgreSQL)

    - by Timka
    Recently our web server crashed and we had to recover everything from a backup, which took a whole day (totally unacceptable in our business). So my question is: how can I create a complete mirror of the server that I can switch DNS to in case the same disaster happens in the future? Our main server is on Amazon with Windows 2008/IIS + PostgreSQL 9.1. I was thinking of building the same server in a different location as a complete mirror with database replication, but I'm not sure how to implement IIS instance mirroring over the internet.
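    The database half has a stock answer in PostgreSQL 9.1: streaming replication to a hot standby. A hedged sketch of the key settings only (the base backup, the pg_hba.conf entry, and the hostnames are omitted or hypothetical):

      # primary: postgresql.conf
      wal_level = hot_standby
      max_wal_senders = 3
      # standby: recovery.conf
      standby_mode = 'on'
      primary_conninfo = 'host=primary.example.com user=replicator'

    IIS has no direct equivalent of streaming replication; the usual approach is to keep site content synced (e.g. with robocopy or a deployment tool) and the configuration scripted, so the standby can be rebuilt deterministically before the DNS switch.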

    Read the article

  • TRADACOMS Support in B2B

    - by Dheeraj Kumar
    TRADACOMS is an early EDI standard, used predominantly in the retail sector of the United Kingdom. It is similar to the EDIFACT messaging system, involving an ecs file for translation and validation of messages. The slight differences between EDIFACT and TRADACOMS are:
    1. TRADACOMS is a simpler version than EDIFACT.
    2. There is no functional acknowledgment in TRADACOMS.
    3. Since it is just a business message sent to the trading partner, the various reference numbers at STX, BAT, and MHD level need not be persisted in B2B, as no business logic is derived from them.
    Considering this, in AS11 B2B this can be handled out of the box using the positional flat file document plugin. Since the STX and BAT segments, which define the envelope details and are part of the transaction, have to be sent from the back-end application itself (there are no document protocol parameters defined for them in B2B), these would include identifiers such as SenderCode, SenderName, RecipientCode, RecipientName, and the reference numbers. Additionally, batching can be achieved by sending all the messages of a batch in a single XML from the back-end application, with the total number of messages in the batch carried in the EOB (batch trailer) segment. In the inbound scenario, we can identify the document based on the start and end positions in the incoming document; however, there is a plan to identify incoming documents based on the TRADACOMS standard instead of start/end position. Please email [email protected] if you need a working sample.

    Read the article

  • Can't change to Korean-named directory on my debian server

    - by DaLynX
    I made an rsync backup of some directories from a MacBook laptop to a Debian server. Some of these have Korean characters (Hangeul) in their names. After fixing my server's locale, the names display well when I do ls, for instance. But I can't cd into them. Example:

      $ ls -1 | head
      ???
      dirA
      dirB
      …

    But if I try to browse that directory:

      $ cd ???
      cd: 3: can't cd to ???

    Any idea what's wrong and how to fix it?
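    A hedged sketch of two quick checks/workarounds: confirm the session's locale really is UTF-8 (the "cd: 3:" error message looks like plain sh, which may not inherit the fixed locale), and let the shell match the name so you never have to type it:

      locale          # LANG/LC_ALL should show a UTF-8 locale, e.g. en_US.UTF-8
      cd ./???        # glob: matches any three-character name, if it is unique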

    Read the article

  • Photo/Video gallery for Ubuntu web server

    - by Andrew
    I'm trying to have a gallery that can display images as well as videos for visitors to my web server. I'm running Ubuntu 12.10, and have Apache installed. All my images and videos are in /var/www/media. I've taken a look at bbgallery, which is simple enough for me, but I don't think it supports video. I've also looked at Single File PHP Gallery, but it doesn't support video. Does anyone know of a gallery that supports video as well, which I can use for my web server? EDIT: I do not have a database.

    Read the article

  • Laptop wakes from sleep, once, due to audio controller (Windows 7)

    - by stijn
    The laptop is a recent Dell XPS 15z and the problem is as follows (reproducible in about 90% of tries):
    1. Put the laptop to sleep, using either Start > Sleep or closing the lid.
    2. The laptop goes to sleep, but after about 5 seconds it instantly wakes again, showing a black screen (touching the keyboard or moving the mouse brings up the login screen one normally gets after a wake).
    3. Log in again and put the laptop to sleep.
    4. This time the laptop stays in sleep mode.
    The output of powercfg -lastwake after the first instant wake shows the audio controller is responsible. Why would that be, why only on the first try, and how can I fix this?

      Wake History Count - 1
      Wake History [0]
        Wake Source Count - 1
        Wake Source [0]
          Type: Device
          Instance Path: PCI\VEN_8086&DEV_1C20&SUBSYS_04461028&REV_05\3&11583659&0&D8
          Friendly Name:
          Description: High Definition Audio Controller
          Manufacturer: Microsoft
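    If the immediate goal is just to stop that device from waking the machine, powercfg can disarm wake per device. A hedged sketch (run from an elevated command prompt; the quoted name must match what powercfg itself lists):

      powercfg -devicequery wake_armed
      powercfg -devicedisablewake "High Definition Audio Controller"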

    Read the article

  • Using QoS to prioritize IP addresses

    - by Tristan
    I have a Western Digital N900 router. I was hoping I'd be able to throttle users based on their MAC address with it, which sadly isn't possible. Seems simple in principle though, duh. The battle against bandwidth-hogging roommates rages on. Could I just set the local IP range to their IP, set the local port range to every single port in existence, and then prioritize their IP lower than mine? Will this work? What are all the ports? And what's the difference between local and remote IPs or ports? Name: Roommate, Priority: Low, Protocol: TCP or UDP??, Local IP Range: .101 to .101, Local Port Range: 0 to infinity, Remote IP Range: ? to ?, Remote Port Range: ? to ?

    Read the article

  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan. The MySQL Cluster team are working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users:
    - allows end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database
    - native "NoSQL" access to the storage layer without going first through SQL transformations and parsing
    Node.js is a complete web platform built around JavaScript designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future.

    Implementation

    The architecture and user interface of this connector are very different from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows users to query data from the database and work with a fully-instantiated object, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database. To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server sits between the user and the data. The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. The following sections take you through how to connect to MySQL and query the data, and how to get started.

    Connecting to the database

    A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function:

      var nosql = require("mysql-js");
      var dbProperties = {
          "implementation" : "ndb",
          "database" : "test"
      };
      nosql.openSession(dbProperties, null, onSession);

    The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects.

    Reading data

    The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema:

      create table employee (
        id int not null primary key,
        name varchar(32),
        salary float
      ) ENGINE=ndbcluster;

    Since the primary key is a number, you can provide the key as a number to the find function:

      var onSession = function(err, session) {
        if (err) {
          console.log(err);
          // ... error handling
        }
        session.find('employee', 0, onData);
      };

      var onData = function(err, data) {
        if (err) {
          console.log(err);
          // ... error handling
        }
        console.log('Found: ', JSON.stringify(data));
        // ... use data in application
      };

    If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses by specifying an annotation, and pass your domain model to the find function:

      var annotations = new nosql.Annotations();
      var Employee = function(id, name, salary) {
        this.id = id;
        this.name = name;
        this.salary = salary;
        this.giveRaise = function(percent) {
          this.salary *= (1 + percent);  // raise salary by the given fraction
        };
      };
      annotations.mapClass(Employee, {'table' : 'employee'});

      var onSession = function(err, session) {
        if (err) {
          console.log(err);
          // ... error handling
        }
        session.find(Employee, 0, onData);
      };

    Updating data

    You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database using the update function:

      var onData = function(err, emp) {
        if (err) {
          console.log(err);
          // ... error handling
        }
        console.log('Found: ', JSON.stringify(emp));
        emp.giveRaise(0.12); // gee, thanks!
        session.update(emp); // oops, session is out of scope here
      };

    Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables. The connector API takes a fixed number of parameters and returns a fixed number of result parameters to the callback function, but the connector will keep track of extra variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function:

      var onSession = function(err, session) {
        if (err) {
          console.log(err);
          // ... error handling
        }
        session.find(Employee, 0, onData, session);
      };

      var onData = function(err, emp, session) {
        if (err) {
          console.log(err);
          // ... error handling
        }
        console.log('Found: ', JSON.stringify(emp));
        emp.giveRaise(0.12); // gee, thanks!
        session.update(emp, onUpdate); // session is now in scope
      };

      var onUpdate = function(err, emp) {
        if (err) {
          console.log(err);
          // ... error handling
        }
      };

    Inserting data

    Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it:

      var onSession = function(err, session) {
        var data = new Employee(999, 'Mat Keep', 20000000);
        session.persist(data, onInsert);
      };

    Deleting data

    To remove data from the database, use the session remove function. You use an instance of the domain object to identify the row you want to remove; only the key field is relevant:

      var onSession = function(err, session) {
        var key = new Employee(999);
        session.remove(key, onDelete);
      };

    More extensive queries

    We are working on the implementation of more extensive queries along the lines of the criteria query API. Stay tuned.

    How to evaluate

    The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build: MySQL-Cluster-NoSQL-Connector-for-Node-js. You can also clone the project on GitHub. Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here.

    Read the article

  • Why would you ever set MaxKeepAliveRequests to anything but unlimited?

    - by Jonathon Reinhart
    Apache's KeepAliveTimeout exists to close a keep-alive connection if a new request is not issued within a given period of time. Provided the user does not close his browser/tab, this timeout (usually 5-15 seconds) is what eventually closes most keep-alive connections, and prevents server resources from being wasted by holding on to connections indefinitely. Now the MaxKeepAliveRequests directive puts a limit on the number of HTTP requests that a single TCP connection (left open due to KeepAlive) will serve. Setting this to 0 means an unlimited number of requests are allowed. Why would you ever set this to anything but "unlimited"? Provided a client is still actively making requests, what harm is there in letting them happen on the same keep-alive connection? Once the limit is reached, the requests still come in, just on a new connection. The way I see it, there is no point in ever limiting this. What am I missing?

    Read the article

  • Entering new user data into AD LDS

    - by Robert Koritnik
    I need some help configuring AD LDS (Active Directory Lightweight Directory Services). I'm not an administrator, have never configured domains, and I don't have a clue how to add new users to existing domains. The thing is, I need to develop an app on top of SharePoint 2010 that must be connected to AD. I've chosen AD LDS because I can install it on Windows 7, and it acts as an Active Directory even though there's no domain controller present in the network. What I've done so far: I've installed AD LDS, and I've added a new instance with the application directory partition name CN=Air,DC=Watanabe,DC=pri. I can connect to it using ADSI Edit and see all kinds of strange objects, but now I don't know what to do next. Can anybody give me some guidelines on how to add domain users, so I can use them in my AD-backed app?
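    A hedged sketch of one way to add a user to that partition: describe the entry in an LDIF file and import it with ldifde. The user's CN and the port are hypothetical; the partition DN follows the question:

      dn: CN=jdoe,CN=Air,DC=Watanabe,DC=pri
      changetype: add
      objectClass: user
      cn: jdoe
      displayName: John Doe

    Imported with something like: ldifde -i -f newuser.ldif -s localhost:389 (the port is whatever the AD LDS instance was configured with).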

    Read the article

  • Insulate hosted client domains from server IP address change?

    - by babtek
    I will be hosting web content for many client domains on a single IP address (with a web hosting company, not an in-house machine). Initially, I must give each client some information to configure their registrar to point their domain at my server. I want client domains insulated from a potential IP address change, so that if I change hosts/IP address they don't have to reconfigure anything with their registrar. Is this reasonably possible without running my own nameserver? If so, what would be the smartest way to make it happen? Instruct clients to make a CNAME record? Use some type of DNS management service that clients would use as a nameserver?
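    A hedged sketch of the CNAME approach (all names and the IP are hypothetical): each client aliases their hostname to a name under the hosting domain, so only one A record changes when the server moves. One standard caveat: a zone apex (clientdomain.com itself, without www) cannot be a CNAME, so apex domains still need an A record or a DNS provider with an ALIAS-style workaround.

      ; client's zone, configured once at their registrar
      www.clientdomain.com.   IN CNAME  hosting.mydomain.com.
      ; my zone; the only record that changes on an IP move
      hosting.mydomain.com.   IN A      203.0.113.10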

    Read the article

  • ASP.NET Connections Spring 2012 Talks and Code

    - by Stephen.Walther
    Thank you everyone who attended my ASP.NET Connections talks last week in Las Vegas. I’ve attached the slides and code for the three talks that I delivered: Using jQuery to interact with the Server through Ajax– In this talk, I discuss the different ways to communicate information between browser and server using Ajax. I explain the difference between the different types of Ajax calls that you can make with jQuery. I also discuss the differences between the JavaScriptSerializer, the DataContractJsonSerializer, and the JSON.NET serializer. ASP.NET Validation In-Depth– In this talk, I distinguish between View Model Validation and Domain Model Validation. I demonstrate how you can use the validation attributes (including the new .NET 4.5 validation attributes), the jQuery Validation library, and the HTML5 input validation attributes to perform View Model Validation. I then demonstrate how you can use the IValidatableObject interface with the Entity Framework to perform Domain Model Validation. Using the MVVM Pattern with JavaScript Views – In this talk, I discuss how you can create single page applications (SPA) by taking advantage of the open-source KnockoutJS library and the ASP.NET Web API. Be warned that the sample code is contained in Visual Studio 11 Beta projects. If you don’t have this version of Visual Studio, then you will need to open the code samples in Notepad. Also, I apologize for getting the code for these talks posted so slowly. I’ve been down with a nasty case of the flu for the past week and haven’t been able to get to a computer.

    Read the article

  • Bandwidth Control on our Internet Connection

    - by AlamedaDad
    Hi all, I have Covad dual/bonded T1 service in our office, coming through a Cisco 1841 and then through a SonicWALL 3060 Pro/Enhanced firewall. The problem I'm looking for input on is how to limit the amount of bandwidth any single user/PC can use when downloading a file from the Internet. It's become an issue that when one person downloads, say, a ~300MB file, normal internet access for the other employees slows to a crawl. I've seen through MRTG that usage of the circuit in fact jumps to the full 3 Mbps for the duration of the download and then drops. Is it possible to control this? I'm not familiar with QoS or the like, so I'm not sure. Any help on this would be appreciated. Thanks... Michael

    Read the article

  • Restore data from overwritten LVM

    - by Matthias Bayer
    I lost all of my data (8 TB), which I collected over the past few years, yesterday, because I made some serious mistakes while remounting my LVM. I run a XenServer 5.6 installation with 4 additional hard disks for data storage. An LVM over those 4 HDDs was used to store all of my data. Yesterday, I reinstalled XenServer and wanted to mount my old hard drives and add the LVM. I ran xe sr-create [...] for all disks (/dev/sdb .. /dev/sde), but that was totally wrong: this command deletes the old LVM on the disks and creates a new, empty LVM on every single disk, with no partitions. Now I've got 4 empty hard drives :( Is it possible to recover some data from the lost LVM volumes? I have no clue how to do it, because I deleted all information about the old LVM. Is there a way to access the files inside that old LVM directly?

    Read the article

  • The Strange History of the Honeywell Kitchen Computer

    - by Jason Fitzpatrick
    In 1969 the Honeywell corporation released a $10,000 kitchen computer that weighed 100 pounds, was as big as a table, and required advanced programming skills to use. Shockingly, they failed to sell a single one. Read on to be dumbfounded by how ahead of (and out of touch with) its time the Honeywell Kitchen Computer was. Wired delves into the history of the device, including how difficult it was to use: Now try to imagine all that in a late 1960s kitchen. A full H316 system wouldn’t have fit in most kitchens, says design historian Paul Atkinson of Britain’s Sheffield Hallam University. Plus, it would have looked entirely out of place. The thought that an average person, like a housewife, could have used it to streamline chores like cooking or bookkeeping was ridiculous, even if she aced the two-week programming course included in the $10,600 price tag. If the lady of the house wanted to build her family’s dinner around broccoli, she’d have to code in the green veggie as 0001101000. The kitchen computer would then suggest foods to pair with broccoli from its database by “speaking” its recommendations as a series of flashing lights. Think of a primitive version of KITT, without the sexy voice. Hit up the link below for the full article.

    Read the article

  • Licensing a JavaScript library

    - by Kendall Frey
    I am developing a free, open-source (duh) JavaScript library and wondering how to license it. I was considering the GNU GPL, but I heard that I must distribute the license with the software, and now I'm not sure. I would like the library to be available much like jQuery: as a free, downloadable script, preferably in either original or minified form. Am I mistaken about the GNU GPL's license terms? jQuery is dual-licensed under the GNU GPL and MIT licenses. How does the GPL apply to single script files like that? Can I license my library with nothing more than a few sentences in the script file? Is there another license that better suits my needs? What would be nice is a license that allows you to put just the URL in the source, for people to read if they want; I don't know that many do, unless I am mistaken. I generally want to release the library as free software in the way the GPL specifies, but I don't want to force licensees to download the full license unless they wish to read it.
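    A hedged sketch of the few-sentences-in-the-file approach the question asks about, using the MIT license (which permits exactly this; the header below is illustrative, not legal advice, and the library name is hypothetical):

      /*!
       * mylib.js
       * Copyright (c) 2012 Kendall Frey
       * Released under the MIT license
       * https://opensource.org/licenses/MIT
       */

    The /*! form is a practical touch: many minifiers preserve bang comments by default, so the notice survives in the minified copy.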

    Read the article

  • Shell Script Launching Child Processes

    - by Matt James
    Disclaimer: I'm totally new to shell scripting, but have quite a bit of experience in other languages like PHP and Obj-C. I'm writing my first daemon script. Here are the goals:
    1. I want it to run in the background.
    2. I want it to be triggered by an init.d script that includes start/stop/restart commands.
    3. I want each process in a loop to trigger its own subprocess.
    4. When the parent process kicked off by the init.d script is killed, I want the subprocesses to die as well.
    Essentially, I'm looking for the same kind of behavior that appears to be very common among software like apache, spamd, dovecot, etc. But, based on my research, I haven't found a single, simple answer as to how this kind of thing is achieved.
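    A hedged skeleton of the trap-based pattern for goal #4 (the worker command is hypothetical, and kill -- -$$ reaching all children assumes the script is its own process group leader, which is typical when an init.d wrapper launches it via setsid):

      #!/bin/bash
      # take the whole process group down with the parent
      cleanup() { kill -- -$$ 2>/dev/null; }
      trap cleanup EXIT TERM INT

      while true; do
          do_work &        # hypothetical subprocess; inherits our process group
          wait $!
      done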

    Read the article

  • How to change controller numbering/enumeration in Solaris 10?

    - by Jim
    After moving a Solaris 10 server to a new machine, the rpool disk is now c1t0d0. We have some third party applications hard coded for c0t0d0. How can I change the controller enumeration on this machine? There is no longer a c0. I've tried rebuilding the /etc/path_to_inst, but the instance numbers don't seem to match up with the controller numbers. Also, it's not clear if i86pc platforms use this file. I've tried devfsadm -C to clear the dangling links, but I'm not sure how to cause devfsadm to start numbering from 0 again (or force certain devices in the tree to a specific controller number). Next I am going to try to create the symlinks manually in /dev/dsk and rdsk to point to the correct /devices. I feel like I am going way off path here. Any suggestions? Thanks

    Read the article
