Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

Page 705/1338

  • Using QoS to prioritize IP addresses

    - by Tristan
    I have a Western Digital N900 router. I was hoping I'd be able to throttle users based on their MAC address with it, which sadly isn't possible. It seems simple in principle, though. The battle against bandwidth-hogging roommates rages on. Could I just set the Local IP range to their IP, set the Local port range to every port in existence, and then give their IP a lower priority than mine? Will this work? What are all the ports? And what's the difference between Local and Remote IPs or ports? Name: Roommate, Priority: Low, Protocol: TCP or UDP?, Local IP Range: .101 to .101, Local Port Range: 0 to infinity, Remote IP Range: ? to ?, Remote Port Range: ? to ?

    Read the article

  • Everything changes, except that it doesn't

    - by Dennis Vroegop
    You may have noticed this. Microsoft launched a new product this week. Well, "launched" is a strong word; they announced it. They call it… The Surface! What is it? Well, it's a cool-looking family of tablets designed for Windows 8. And I have to say: they look stunning and I can't wait to have one. There's just one thing… the name. Where have I heard that before? Why, indeed! Yes, it is also the name of a new paradigm in computing: Surface Computing. You may have read something about that, here on this blog for instance. Well, in order to prevent confusion they have decided to rename Surface to PixelSense. You know, the technology that drives the 'camera' inside the Samsung SUR40 for Microsoft Surface… So now, when we talk about Surface, we mean the new tablet. When we talk about PixelSense, we mean that big table… So there you have it… Welcome PixelSense! For more info see http://www.surface.com for the tablet and http://www.pixelsense.com for PixelSense.

    Read the article

  • On Red Hat Enterprise Linux and CentOS, what is creating /var/run/reboot-required?

    - by EdwardTeach
    On CentOS 5.8+ and Red Hat Enterprise 6+, when installing/updating packages, I notice a flag file /var/run/reboot-required is created when appropriate. On Ubuntu (and Debian too, I'm guessing), if package "update-notifier-common" is installed, a package postinst script triggers creation of this flag file. On RHEL/CentOS I can't figure out how this is happening. For instance, on RHEL and CentOS I recently installed several updates and /var/run/reboot-required was created. One of them was an "openssl" package upgrade. I assume this was what created the flag file, since on Ubuntu it also works this way. However I looked at all "rpm -q --scripts" for each updated package, and didn't see anything that was likely to have created that flag file. Mostly I saw "postinstall program: /sbin/ldconfig". So my questions are: What creates this flag file on RHEL/CentOS? Does it require a special package to be installed, analogous to the "update-notifier-common" package on Ubuntu?
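
    One rough way to track the creator down, sketched here under the assumption of a stock rpm/yum system (the grep pattern and loop are illustrative, not a documented RHEL feature):

      # scan every installed package's scriptlets for references to the flag file
      for pkg in $(rpm -qa); do
          rpm -q --scripts "$pkg" 2>/dev/null | grep -q 'reboot-required' && echo "$pkg"
      done
      # yum plugins can run arbitrary hooks, so check those too
      grep -rl 'reboot-required' /usr/lib/yum-plugins/ /etc/yum/ 2>/dev/null

    If neither turns anything up, a configuration-management agent (Puppet, Chef, a kickstart %post script) running on the box is the next suspect.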

    Read the article

  • How to set up a new website with Amazon EC2?

    - by ElHaix
    For a new EC2 instance, I set up a Windows server with IIS. I added the Amazon name servers to my domain and configured an Elastic IP pointing to the server. I know this is working, as I use it for RDC. On the server, I added the website tied to the IP address, and used the quicklink security group that has port 80 open. However, whenever I try going to the URL, I pretty much get nothing, and I'm not sure where the blockage is occurring. Any suggestions? Thanks.
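
    A quick way to narrow down where the blockage is, as a rough diagnostic sketch run on the server itself (the firewall rule name and values are only examples):

      rem confirm IIS is actually listening on port 80
      netstat -an | findstr :80
      rem allow inbound HTTP through the Windows firewall if it isn't already
      netsh advfirewall firewall add rule name="HTTP inbound" dir=in action=allow protocol=TCP localport=80

    If a local request to the site works but a remote one doesn't, the usual suspects are the security group attached to the instance, the Windows firewall, or a host-header binding on the site that doesn't match the name you're browsing to.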

    Read the article

  • Laptop wakes from sleep, once, due to audio controller (Windows 7)

    - by stijn
    The laptop is a recent Dell XPS 15z and the problem is as follows (reproducible in about 90% of tries): put the laptop to sleep using either Start > Sleep or by closing the lid; the laptop goes to sleep, but after about 5 seconds instantly wakes again, showing a black screen (touching the keyboard or moving the mouse shows the login screen one normally gets after a wake); log in again, put the laptop to sleep; this time the laptop stays in sleep mode. The output of powercfg -lastwake after the first instant wake shows the audio controller is responsible. Why would that be, why only on the first try, and how do I fix this? Wake History Count - 1 Wake History [0] Wake Source Count - 1 Wake Source [0] Type: Device Instance Path: PCI\VEN_8086&DEV_1C20&SUBSYS_04461028&REV_05\3&11583659&0&D8 Friendly Name: Description: High Definition Audio Controller Manufacturer: Microsoft
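
    If the audio controller really is the culprit, one hedged workaround is to stop Windows from letting that device arm a wake at all; from an elevated prompt (the device name must match what powercfg reports on your machine):

      powercfg -devicequery wake_armed
      powercfg -devicedisablewake "High Definition Audio Controller"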

    Read the article

  • How to make a PXE-bootable 10 MB or larger DOS image?

    - by rjt
    I would like to make a "massive" DOS floppy disk image, say 10 MB or more, containing all the firmware updates I need for any system, hard drive, or BIOS. I do not need the DOS image to be networkable, as everything will be on the PXE-booted image, but networking would be nice. Since Zip disks attached to the floppy disk controller held over 100 MB, this should be possible. I tried a long time ago to do this and spent too much time on it, only to have it fail to boot. So if someone has reliable instructions on how to create and edit such a nightmarish beast, please let me know. One image that can be used for PXE and copied to a USB stick would be a plus. Too bad manufacturers don't supply a single bootable Linux ISO containing all their firmware updates; that would be easy to boot over the LAN and would have networking. HP servers do this and it is awesome.
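
    A rough sketch of one way to build and boot such an image with syslinux's memdisk, assuming mtools and syslinux are installed and you already have bootable DOS system files to transplant (sizes and paths are examples):

      dd if=/dev/zero of=bigdos.img bs=1M count=32       # a 32 MB "floppy"
      mkdosfs bigdos.img                                 # format it as FAT
      # make it bootable: copy the system files (IO.SYS, COMMAND.COM, ...) from a
      # working DOS/FreeDOS floppy image and write a DOS boot sector (e.g. with ms-sys)
      mcopy -i bigdos.img firmware/* ::/                 # drop the firmware updaters on it
      # pxelinux.cfg/default entry to boot it over the LAN:
      #   LABEL firmware
      #     KERNEL memdisk
      #     APPEND initrd=bigdos.img raw

    The same image can be copied to a USB stick with dd, though whether a given BIOS will boot an unpartitioned superfloppy image that way varies.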

    Read the article

  • Static error page served by nginx when my application is down

    - by dreeves
    If my (Rails) application is down or undergoing database maintenance or whatever, I'd like to specify at the nginx level that a static page be served. So every URL like http://example.com/* should serve a static HTML file, like /var/www/example/foo.html. Trying to specify that in my nginx config is giving me fits and infinite loops and whatnot. I'm trying things like location / { root /var/www/example; index foo.html; rewrite ^/.+$ foo.html; } How would you get every URL on your domain to serve a single static file?
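
    One pattern that avoids the rewrite loop is to answer everything with a 503 and point the error page at the static file; a minimal sketch, assuming the page lives at /var/www/example/foo.html:

      server {
          listen 80;
          server_name example.com;
          root /var/www/example;
          location / {
              return 503;
          }
          error_page 503 /foo.html;
          location = /foo.html {
              # foo.html itself is served as-is, still with the 503 status
          }
      }

    Returning 503 rather than 200 also tells crawlers the outage is temporary.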

    Read the article

  • Mono on Linux: Apache or Nginx

    - by Furism
    Hi, I'm developing an ASP.NET application that will be run under Linux/Mono for various reasons (mostly to stay away from IIS, quite frankly). Of course the first web server I had in mind was Apache. But Apache, for all its advantages, adds a lot of overhead. Also, the application I'm building needs to be highly scalable, and performance is one of the main concerns. Apache has, obviously, a very good reputation and its record speaks for itself, but I don't need things like reverse proxying or load balancing because dedicated network devices would be used for that. So those Apache modules would never be used. So basically my question is: since Nginx seems to fit my needs exactly, is there any caveat I should be aware of? For instance, is Nginx known to be particularly secure? When security flaws are detected, how fast are they patched? Any insight on the pros and cons of using either of those servers in conjunction with Mono is welcome.
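
    For what it's worth, the usual Mono-on-nginx arrangement keeps nginx itself free of Mono: the ASP.NET app runs in a separate FastCGI backend that nginx proxies to. A rough sketch, assuming the xsp package's fastcgi-mono-server4 launcher and an app rooted at /var/www/myapp (paths and the port are examples):

      fastcgi-mono-server4 /applications=/:/var/www/myapp /socket=tcp:127.0.0.1:9000 &

    On the nginx side, a location block with fastcgi_pass 127.0.0.1:9000 plus the stock fastcgi_params completes the chain, so nginx stays as small as it normally is and the Mono worker can be restarted or scaled independently.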

    Read the article

  • Photo/Video gallery for Ubuntu web server

    - by Andrew
    I'm trying to have a gallery that can display images as well as videos for visitors to my web server. I'm running Ubuntu 12.10, and have Apache installed. All my images and videos are in /var/www/media. I've taken a look at bbgallery, which is simple enough for me, but I don't think it supports video. I've also looked at Single File PHP Gallery, but it doesn't support video. Does anyone know of a gallery that supports video as well, which I can use for my web server? EDIT: I do not have a database.

    Read the article

  • git/gitolite: big git repo with several mini projects

    - by Jay
    I'm pretty new to the whole version control thing, and even more so with git. I recently installed git on my computer(s) and set it up on a NAS server. However, I have several client folders, with several project folders per client folder. Each one of these client folders is a giant repo, encompassing every project inside it. What I'm wondering is: is there a way to break this apart? So, for instance: the NAS is my 'origin' and has gitolite installed; on computer1 I have every project folder in a client folder ever created (a clean branch); on computer2 I do not have a checkout of the client branch (because all the projects in that branch are completed and I don't need a working copy of it), but I do have a brand new project folder for that client, "newproject". Is there a way to commit and push to the NAS repo from computer2? Or perhaps is there a better way of organizing all this?
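
    If the aim is one repo per project instead of one giant repo per client, a rough sketch of carving a single project out of the existing history (assumes git-subtree is available; the gitolite URL and names are hypothetical):

      cd clientA
      git subtree split --prefix=project1 -b project1-only    # history of just that folder
      git push ssh://git@nas/clientA-project1.git project1-only:master

    computer2 can then clone just clientA-project1 (or start newproject as its own small repo and push it to a fresh gitolite repo) without ever pulling down the rest of the client's history.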

    Read the article

  • VMware Guest NIC Teaming

    - by Justin Popa
    We're looking to add additional bandwidth to a VM running on an ESXi cluster running 5.1. How can I team these within the VM? I suspect I need to add a second e1000 and then install some Intel software to team them. Any idea which version of Intel driver? Is there some better software to use? EDIT: Sorry, neglected some information. The guest OS is Win2k8R2. The physical NICs on the host are 1Gbps. The reason this has come up is we are seeing the VM hitting near cap on the capability of a single 1Gbps link (Usually at 100-110MBps, bursting to 130s, but I think that may just be a UI math lie) and we're interested in seeing if adding an additional NIC in a teamed setting will increase the overall throughput.

    Read the article

  • Create mix CDs from MP3 files

    - by Dave Jarvis
    How would you write a script (preferably for the Windows command line) that: examines thousands of MP3 files stored on a single drive (e.g., G:\); randomizes the collection; populates a series of directories with up to 650 MB worth of songs (without exceeding 650 MB); uses every song exactly once; and (optionally) gets each directory's size as close as possible to 650 MB? The DIR, COPY, and XCOPY commands have no explicit file-size switches. A few Google searches have come up with: file-size conditions in DOS; Cygwin and UWIN; DOS file sizes. It would be ideal if UNIX-like environments could be avoided. My question, then: how do you compare file (or directory) sizes using the Windows command line?
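
    A rough PowerShell sketch of the whole job (PowerShell ships with Windows, so it stays on the command line; the paths and the 650 MB limit are parameters to adjust):

      $limit = 650MB; $disc = 1; $used = 0
      $songs = Get-ChildItem G:\ -Recurse -Filter *.mp3 | Sort-Object { Get-Random }   # shuffle
      foreach ($song in $songs) {
          if ($used + $song.Length -gt $limit) { $disc++; $used = 0 }                  # start a new disc
          $dir = "G:\discs\disc$disc"
          New-Item -ItemType Directory -Path $dir -Force | Out-Null
          Copy-Item $song.FullName $dir
          $used += $song.Length
      }

    For the narrower question of sizes: in PowerShell, (Get-ChildItem somedir -Recurse | Measure-Object Length -Sum).Sum gives a directory's total bytes, and in plain cmd, %~zI inside a FOR loop expands to a file's size in bytes.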

    Read the article

  • MaxClients in Apache. How to know the size of my process?

    - by Larry
    From http://httpd.apache.org/docs/2.2/misc/perf-tuning.html: The single biggest hardware issue affecting webserver performance is RAM. A webserver should never ever have to swap, as swapping increases the latency of each request beyond a point that users consider "fast enough". This causes users to hit stop and reload, further increasing the load. You can, and should, control the MaxClients setting so that your server does not spawn so many children it starts swapping. The procedure for doing this is simple: determine the size of your average Apache process, by looking at your process list via a tool such as top, and divide this into your total available memory, leaving some room for other processes. The main issue is that I can't work out how to know the size: top shows httpd at no more than 3888. But if I need to determine the number for MaxClients and I have 4 GB of RAM, I get about 972, so should I use something like 900 for MaxClients?
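
    The 3888 figure is almost certainly the resident size of one process in kilobytes, and real mod_php workers are usually tens of megabytes, so it is worth measuring directly; a rough sketch (the process name may be httpd or apache2 depending on the distro, and shared memory makes this an overestimate):

      ps -C httpd -o rss= | awk '{ sum += $1; n++ } END { if (n) printf "avg %.1f MB over %d workers\n", sum/n/1024, n }'

    Then divide the memory you can spare for Apache by that average: for example, 3 GB left over and 30 MB per worker suggests a MaxClients of about 100, not 900.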

    Read the article

  • Shell Script Launching Child Processes

    - by Matt James
    Disclaimer: I'm totally new to shell scripting, but have quite a bit of experience in other languages like PHP and Obj-C. I'm writing my first daemon script. Here are the goals: I want it to run in the background I want it to be triggered by an init.d script that includes start/stop/restart commands I want each process in a loop to trigger its own subprocess. When the parent process kicked off by the init.d script is killed, I want the subprocesses to die as well. Essentially, I'm looking for the same kind of behavior that appears to be very common among software like apache, spamd, dovecot, etc. But, based on my research, I haven't found a single, simple answer as to how this kind of thing is achieved. Any help is greatly appreciated.
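
    A minimal sketch of the parent loop, assuming the init.d script stops it with a plain kill on the PID file (the worker command and paths are placeholders):

      #!/bin/sh
      PIDFILE=/var/run/mydaemon.pid            # hypothetical location
      cleanup() {
          trap '' TERM                         # ignore the TERM we're about to send ourselves
          kill 0                               # forward it to the whole process group, i.e. every child
          rm -f "$PIDFILE"
          exit 0
      }
      trap cleanup TERM INT
      echo $$ > "$PIDFILE"
      while true; do
          do_one_unit_of_work &                # placeholder for the real subprocess
          wait $!
          sleep 5
      done

    The init.d start action backgrounds this script (or runs it under start-stop-daemon), and stop does kill $(cat /var/run/mydaemon.pid); the trap then takes the children down via kill 0 before the parent exits.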

    Read the article

  • Mac OS X Server add server user

    - by Meltemi
    What's the recommended way to add a user to Mac OS X Server that doesn't need all the hoopla associated with Workgroup Manager? There are many users pre-configured in Mac OS X Server (www, root, ldapadmin, etc.) that don't have a "Full Name" or mail accounts, etc. I'd like to create an 'svn' user to be the owner of our Subversion repository, as per this tutorial: If you've decided to use either Apache or stock svnserve, create a single svn user on your system and run the server process as that user. Be sure to make the repository directory wholly owned by the svn user as well. From a security point of view, this keeps the repository data nicely siloed and protected by operating system filesystem permissions, changeable only by the Subversion server process itself. I'm wondering if there's a way outside of Workgroup Manager and Open Directory, as this account will be entirely server based. Is this still sound advice under OS X Server? If so, what's the easiest way to create the user (Mac OS X Server doesn't seem to respond to useradd)?
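
    The command-line equivalent of useradd on OS X is dscl, so a rough sketch for a no-login service account looks like this (the UID/GID of 300 and the repository path are only examples; pick IDs below 500 that aren't already in use):

      sudo dscl . -create /Users/svn
      sudo dscl . -create /Users/svn UserShell /usr/bin/false
      sudo dscl . -create /Users/svn UniqueID 300
      sudo dscl . -create /Users/svn PrimaryGroupID 300
      sudo dscl . -create /Users/svn NFSHomeDirectory /var/empty
      sudo chown -R svn /var/svn/repository      # hand the repository to the new user

    With no password and no real shell the account stays purely server-side, which keeps the spirit of the tutorial's advice without touching Workgroup Manager.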

    Read the article

  • Bandwidth Control on our Internet Connection

    - by AlamedaDad
    Hi all, I have Covad dual/bonded T1 service in our office coming through a Cisco 1841 and then through a SonicWALL 3060 Pro/Enhanced firewall. The problem I'm looking for some input on is how to limit the amount of bandwidth any single user/PC can use when downloading a file from the Internet. It's become an issue that when one person happens to download, say, a ~300 MB file, normal Internet access for the other employees slows to a crawl. I've seen through MRTG that usage of the circuit does in fact jump to the full 3 Mbps for the duration of the download and then drops. Is it possible to control this? I'm not familiar with QoS or the like, so I'm not sure. Any help on this would be appreciated. Thanks...Michael

    Read the article

  • Merge only one remote branch into a local branch with Mercurial

    - by Pepijn
    I want to manage some profiles as XML files in Mercurial repos. The setup I'm thinking of: each user has a repo with a branch where he manages his own profile, and a number of branches into which he can pull and merge the profile branches of other users. So for example I have my own profile branch and a branch labeled friends, into which I want to pull the profile branches of a few remote repos, to build up a collection of profiles. I figured out that since the repos are unrelated I need to use -f, but I can't figure out how to pull and merge only a single branch into another. Roughly: my friends branch should collect the profile branches of a friend and of someone else, and perhaps a family branch collects others, while my own profile branch stays mine. Is this even possible? Should I use separate repos instead? Is there a better solution?
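
    Roughly, hg pull can be limited to one branch with -r, and -f allows pulling from an unrelated repo; a sketch (the URL is hypothetical, and <changeset> stands for the id reported by hg heads):

      hg pull -f -r profile http://friend.example/hg/their-repo    # fetch only their "profile" branch
      hg heads profile                                             # note the newly pulled head's changeset id
      hg update friends
      hg merge -r <changeset>
      hg commit -m "merge friend's profile into my friends branch"

    One caveat: named branches are global in Mercurial, so if everyone calls their branch "profile" you end up with multiple heads on one branch name; separate repos per person (or bookmarks) keep that tidier, which may answer the last question.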

    Read the article

  • How to disable PHP's ini_set for specific configuration options?

    - by Dunedan
    I'm running a setup with PHP 5.3.8 and use php-fpm with its chroot functionality to separate multiple customers, so each customer has their own chrooted PHP environment, which is quite fine. I now want to prevent a customer from changing the memory_limit of their PHP instance by using ini_set. On the other hand, I don't want to disable ini_set completely. So I'm looking for a way to disable the setting of specific PHP configuration options (like memory_limit) via ini_set. Does somebody know how to achieve that?
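
    php-fpm can do this per pool: options set with php_admin_value or php_admin_flag in a pool file cannot be overridden by ini_set(), while everything else stays changeable. A minimal sketch for one customer's pool (the 128M figure is just an example):

      ; /etc/php-fpm.d/customer1.conf
      php_admin_value[memory_limit] = 128M

    Calls to ini_set('memory_limit', ...) in that customer's code will then simply fail, without affecting any other ini_set usage.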

    Read the article

  • Wiping out user and/or root password in embedded Linux

    - by TryTryAgain
    We have a security camera system running an embedded Linux. It boots with LILO as the bootloader and has no tty access once booted. I don't know any username either. SSH/22 is open, but I don't think brute force is an option. I have tried all the common tricks to reset a Linux user password (booting from the bootloader in single-user mode doesn't work, it still prompts for a user login; booting from a live CD doesn't help, as I can't access the file system, which is all loop files and other binaries, etc.), but none of them are possible because of the way this embedded Linux is set up. Any help/suggestions would be appreciated. Thanks

    Read the article

  • ASP.NET Connections Spring 2012 Talks and Code

    - by Stephen.Walther
    Thank you everyone who attended my ASP.NET Connections talks last week in Las Vegas. I’ve attached the slides and code for the three talks that I delivered: Using jQuery to interact with the Server through Ajax – In this talk, I discuss the different ways to communicate information between browser and server using Ajax. I explain the difference between the different types of Ajax calls that you can make with jQuery. I also discuss the differences between the JavaScriptSerializer, the DataContractJsonSerializer, and the JSON.NET serializer. ASP.NET Validation In-Depth – In this talk, I distinguish between View Model Validation and Domain Model Validation. I demonstrate how you can use the validation attributes (including the new .NET 4.5 validation attributes), the jQuery Validation library, and the HTML5 input validation attributes to perform View Model Validation. I then demonstrate how you can use the IValidatableObject interface with the Entity Framework to perform Domain Model Validation. Using the MVVM Pattern with JavaScript Views – In this talk, I discuss how you can create single page applications (SPA) by taking advantage of the open-source KnockoutJS library and the ASP.NET Web API. Be warned that the sample code is contained in Visual Studio 11 Beta projects. If you don’t have this version of Visual Studio, then you will need to open the code samples in Notepad. Also, I apologize for getting the code for these talks posted so slowly. I’ve been down with a nasty case of the flu for the past week and haven’t been able to get to a computer.

    Read the article

  • What sort of attack URL is this?

    - by Asker
    I set up a website with my own custom PHP code. It appears that people from places like Ukraine are trying to hack it. They're trying a bunch of odd accesses, seemingly to detect what PHP files I've got. They've discovered that I have PHP files called mail.php and sendmail.php, for instance. They've tried a bunch of GET options like: http://mydomain.com/index.php?do=/user/register/ http://mydomain.com/index.php?app=core&module=global&section=login http://mydomain.com/index.php?act=Login&CODE=00 I suppose these all pertain to something like LiveJournal? Here's what's odd, and the subject of my question. They're trying this URL: http://mydomain.com?3e3ea140 What kind of website is vulnerable to a 32-bit hex number?

    Read the article

  • Licensing a JavaScript library

    - by Kendall Frey
    I am developing a free, open-source (duh) JavaScript library, and wondering how to license it. I was considering the GNU GPL, but I heard that I must distribute the license with the software, and I'm not sure anymore. I would like the library to be available much like jQuery: In a free, downloadable script, preferably in either original or minified form. Am I mistaken about the GNU GPL license terms? jQuery is dual licensed under GNU GPL or MIT licenses. How does the GPL apply to single script files like that? Can I license my library with nothing more than a few sentences in the script file? Is there another license that better suits my needs? What would be nice is a license that allows you to put the URL in the source, for people to read if they want. I don't know that many do, unless I am mistaken. I am generally looking to release the library as free software like the GPL specifies, but don't want to have to force licensees to download the full license unless they wish to read it.

    Read the article

  • Scripting a database copy from MS SQL 2005 to 2008 without detach/backup/RDP

    - by James Santiago
    My goal is to move a single SQL 2005 database to a separate 2008 server. The issue is my level of access to both servers: on each I can only access the database and nothing else. I can't create a backup file or detach the database because I don't have access to the file system or the ability to create a proxy. I've tried using the generate-scripts function of SQL 2005 Management Studio Express to restore the schema, but I receive "command not supported" errors when attempting to execute the SQL on the new database. Similarly, I tried using EMS SQL Manager 2005 Lite to script a backup of the schema and data but ran into similar problems. How do I go about accomplishing this? I can't seem to find any solutions outside of using the detach and backup functions.

    Read the article

  • TRADACOMS Support in B2B

    - by Dheeraj Kumar
    TRADACOMS is an early EDI standard, used predominantly in the retail sector of the United Kingdom. It is similar to the EDIFACT messaging system, involving an ecs file for translation and validation of messages. The main differences between EDIFACT and TRADACOMS are: 1. TRADACOMS is a simpler standard than EDIFACT; 2. there is no functional acknowledgment in TRADACOMS; 3. since it is just a business message to be sent to the trading partner, the various reference numbers at the STX, BAT, and MHD levels need not be persisted in B2B, as no business logic gets derived from them. Considering this, in AS11 B2B this can be handled out of the box using the positional flat file document plugin. The STX and BAT segments, which define the envelope details and are part of the transaction, have to be sent from the back-end application itself, as there are no document protocol parameters defined in B2B; these would include identifiers like SenderCode, SenderName, RecipientCode, RecipientName, and reference numbers. Additionally, batching in this case can be achieved by sending all the messages of a batch in a single XML from the back-end application, with the total number of messages in the batch carried in the EOB (batch trailer) segment. In the inbound scenario, we can identify the document based on the start position and end position of the incoming document; however, there is a plan to identify the incoming document based on the TRADACOMS standard instead of start/end position. Please email [email protected] if you need a working sample.

    Read the article

  • What are the benefits of running chef-server instead of chef-solo?

    - by strife25
    I am looking at automated deployment solutions for my team and have been playing with Chef for the past few days. I've been able to get a simple web app up and running from a base Red Hat VM using chef-solo. Our end goal is to use Chef (or another system) to automatically deploy application topologies to the cloud as we run builds. Our process would basically run like so: our web app code, dependencies, and Chef cookbooks are stored in SCM; a build is executed and creates a single package for images to acquire and test against; the build engine then deploys new cloud images that run a Chef client to get packages installed; the images acquire the cookbooks from SCM or the Chef server and install everything to get up and running. What are the benefits and/or use cases for getting a Chef server running? Are there any major benefits to having a Chef server hold and acquire the cookbooks from SCM vs. using chef-solo and having a script that will pull the cookbooks from SCM?
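
    For comparison, chef-solo can already pull its cookbooks straight from a tarball produced by the build, so the serverless flow stays one command per image; a rough sketch (the paths and URL are examples):

      chef-solo -c /etc/chef/solo.rb -j /etc/chef/node.json -r http://builds.example.com/cookbooks-1234.tar.gz

    The usual arguments for running a Chef server on top of that are centrally stored node data, search across nodes, and run lists you can change without cutting a new build artifact; for a fixed topology baked at build time, chef-solo plus SCM is often enough.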

    Read the article
