Search Results

Search found 19533 results on 782 pages for 'machine specifications'.


  • Rackspace Cloud Servers in Europe?

    - by mit
    We have set up a cloud virtual server at Rackspace in the US, but we use it from Europe, and I have found that I am not quite happy with the response time. Of course I knew there would be some latency, but I am not sure whether it is just the overseas latency (ping is 120 ms) or also the minimal resources. It is the smallest machine, 256 MB RAM and 10 GB disk, running a MediaWiki on Ubuntu 10.04 64-bit. The instance lives in the Rackspace ORD1 datacenter. As soon as they have opened their new facilities in the UK we plan to move the instance there, but we are already planning more machines. The pricing is quite attractive. I don't really want to do my own measuring and benchmarking, so I am just asking for your opinions, and it would be nice to hear what you can tell from your experience, maybe from someone who uses such small instances in the US. Also, what can we really expect if we upgrade to more resources?

    Read the article

  • I just got a linode VPS a week ago and I've been flagged for SSH scanning

    - by meder
    I got a 32-bit Debian VPS from http://linode.com and I really haven't done any sort of advanced configuration to secure it (port 22, password authentication enabled). It seems that somehow SSH scanning is going on from my IP, and I'm being flagged because this is against the TOS. I've been SSHing in only from my home Comcast connection, where I run Linux. Is this a common thing when getting a new VPS? Are there any standard security configuration tips? I'm quite confused as to how my machine has been accused of this SSH scanning.
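
    A minimal first-pass hardening sketch of the kind often suggested for a fresh Debian VPS (assuming sshd from the stock openssh-server package and that a working SSH key is already in place; these commands are illustrative, not the accepted answer to the question):

      # Turn off password and root logins, then reload sshd
      # (test from a second session before logging out).
      sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
      sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
      /etc/init.d/ssh reload

      # Throttle brute-force attempts against port 22.
      apt-get install fail2ban

      # Look for anything already on the box that could be doing the scanning.
      last -a | head
      netstat -ntup | grep ':22'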

    Read the article

  • Synchronizing git repository with post-receive hook

    - by eliocs
    Hello, I have a Redmine server and a gitolite server on the same machine. I want Redmine's Git repository to be updated when a commit is registered, so I thought of adding a post-receive script that updates the repository:
    post-receive:
      cd home/redmine/repositories/repo
      git pull
    This doesn't work, because the script is run by the gitolite user instead of the redmine user that owns the cloned repository folder. How can I change the user that executes the commands inside the script? Is there a cleaner way of updating the repository? Thanks in advance.
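
    For illustration, a sketch of what such a hook might look like if the user switch is done with sudo (this assumes the gitolite user has a NOPASSWD sudoers entry allowing it to run commands as redmine, and that the repository path from the question is absolute; both are assumptions, not details from the post):

      #!/bin/sh
      # hooks/post-receive on the gitolite server; runs as the gitolite user.
      # Run the update as the redmine user, which owns the cloned working copy.
      sudo -u redmine sh -c 'cd /home/redmine/repositories/repo && git pull'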

    Read the article

  • MSCC: Purpose and benefits of Version Control Systems (VCS)

    You're working in IT and not using any kind of version control system? Sorry, then you're doing something wrong!
    RSVP for the MSCC meetup of June. This month's meetup will be an introduction to the mechanics of version control systems (VCS) like git, Mercurial, TFS, and others in general. VCS are not optional but compulsory in any area of IT. Whether you're developing source code for the next buzz app, writing SQL scripts for your database, or automating your administrative tasks with shell scripts, it's better to have a "time machine" in order to keep multiple versions, stay organised and leverage the power of differences.
    git - a modern approach to VCS - Nayar: Nayar is going to give us a brief overview of the basic principles of working with git, covering the necessary steps to get started and the usual commands for getting the most out of git.
    Visual Studio Online (VSO) - Jochen: Are you mainly rooted on the Windows platform and looking for a good alternative to Team Foundation Server (TFS)? Then VSO might give you a hand at achieving this. Similar to git, VSO is an open infrastructure, and it plays very well with the Microsoft Azure cloud infrastructure.
    Recent and upcoming events in Mauritius: Let's have a chat about recent events like WebCup 2014 or the Emtel Knowledge Series, and get a head start on upcoming events like Code Challenge and others to come.
    Networking and general discussions: Of course, there will be plenty of time to chat and exchange with other like-minded craftsmen. Bring your topics and discuss various issues with other professionals. Share your experience and take the opportunity to learn from others. Looking forward to meeting you soon.

    Read the article

  • ext4 jbd2 journaling active even on empty filesystem

    - by Paul
    I have been having several issues with my ext4 filesystems that seem to be due to jbd2 journaling. I made a related post here and am rephrasing it with the hope that someone may be able to help. For a minimal example, I start with an empty 8 GB USB stick and use GParted to create one ext4 partition. The command used by GParted when creating the ext4 file system is:
      mkfs.ext4 -j -O extent -L DataTraveler8gb /dev/sde1
    I check the filesystem with:
      e2fsck -f -y -v /dev/sde1
    and I mount it:
      sudo mount /dev/sde1 /media/test
    The disk is empty, but the journaling is very active on this disk (/dev/sde1). The other disks are ext4 SSDs formatted similarly. A snapshot of iotop:
      % sudo iotop -oPa
      Total DISK READ: 0.00 B/s | Total DISK WRITE: 2027.21 K/s
        PID  PRIO  USER   DISK READ  DISK WRITE  SWAPIN    IO      COMMAND
        262  be/3  root     0.00 B    56.00 K    0.00 %    0.18 %  [jbd2/sda1-8]
      29069  be/3  root     0.00 B     0.00 B    0.00 %    0.16 %  [jbd2/sde1-8]
        891  be/3  root     0.00 B     4.00 K    0.00 %    0.03 %  [jbd2/sdc1-8]
    What is jbd2 doing with /dev/sde1? If I follow the same steps with a larger 2 TB disk, iotop indicates the empty disk is constantly being written to by jbd2 at MB/s rates as soon as I mount it. On the other disks, which hold the OS and /home, I have tried to find whether any files are being modified by processes that could cause this behaviour, but couldn't find any. I also moved many of the disk-intensive processes to a tmpfs, and used noatime. I have another non-SSD hard disk on this machine, /dev/sdb, that is also ext4 but was not formatted by GParted (it was given to me by a coworker). It does not appear in iotop. So I am assuming there is an issue with GParted. Any suggestions are appreciated. Also, any tips on how to modify the existing partitions to fix the issue without having to start from scratch would be great. There are some posts related to jbd2, but they didn't help (e.g. here).
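
    As an aside, these are the kinds of adjustments sometimes tried on an existing ext4 partition to quiet jbd2 without reformatting (a sketch only; the mount point and device come from the example above, and removing the journal trades away crash consistency):

      # Stretch the journal commit interval and drop atime updates for this mount.
      sudo mount -o remount,noatime,commit=60 /media/test

      # Or, with the filesystem unmounted, drop the journal entirely
      # (turns ext4 into a non-journalled filesystem).
      sudo umount /media/test
      sudo tune2fs -O ^has_journal /dev/sde1
      sudo e2fsck -f /dev/sde1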

    Read the article

  • UEFI/GPT Win 7 Load Failure in Dual Boot and no GRUB2 [Ubuntu 12.04]

    - by cristian_jordache
    Configuration: motherboard ASRock X79 Extreme6; Win 7 installed on an Intel 40 GB SSD (GPT partitioned); Ubuntu 14.04 on a Corsair 30 GB SSD (ext4 and swap).
    I had Windows 7 installed previously in UEFI mode, using 3 partitions (GPT), and it works fine if left alone. In the UEFI BIOS settings I sometimes see a "Windows Boot Manager" entry and other times (?) a "UEFI Intel" entry for the Intel HDD, and Windows will boot properly when I select whichever one is available at that time. I installed Ubuntu 14.04 after Win 7 without changing any UEFI BIOS settings, and it works fine only if the BIOS is set with the Ubuntu partition as the first drive to boot, in AHCI mode.
    If both SSD drives are connected, the Win 7 Intel boot drive can be chosen as the first boot device, but only as an "AHCI Intel drive" (no "Windows Boot Manager" or "UEFI Intel device" options are available in the BIOS boot menu), and Win 7 will not load properly unless the Ubuntu (Crucial) SSD is physically disconnected. Windows will start booting for a few seconds but then fail, replacing the Win 7 logo and startup animation with the "old" white progress bar, and will then report that there is an issue and prompt the user to try loading Win 7 in Normal Mode again or to try Recovery Mode to fix it. If I let the Windows Intel HDD boot via the BIOS/UEFI "Windows Boot Manager" selection, I may see the purple GRUB2 screen for a while, but there is no selection for Ubuntu or Windows, and/or the machine does not boot, showing a black screen with a small command-prompt cursor blinking at the top.
    So far the only option I see to have Ubuntu boot side by side with Win 7 is to reformat the Win 7 SSD and set it to boot in legacy BIOS mode with an MBR instead of GPT. From my understanding this is a quite complex issue to fix (Rod Smith's answer was pretty helpful: UEFI boot on my Asus k52f), but any other suggestions are welcome. I find it a bit odd that I can boot Windows 7 from its SSD or boot an Ubuntu DVD from a DVD drive with the UEFI BIOS set to "AHCI mode" and using the "UEFI/Windows Boot Manager" boot option, yet I cannot boot a secondary SSD/HDD with Ubuntu using the same BIOS/UEFI boot configuration. It looks like plugging in the second SSD (the Ubuntu one) is interfering with the boot options in the UEFI BIOS.
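
    A diagnostic step sometimes suggested in this kind of situation is to dump the firmware's boot entries from within Ubuntu, with both SSDs attached, to see whether the Windows Boot Manager entry is still present, and then refresh GRUB so os-prober can pick Windows up (a sketch, assuming Ubuntu itself was booted in EFI mode and grub-efi is the installed bootloader):

      # List the UEFI boot entries the firmware currently knows about.
      sudo apt-get install efibootmgr
      sudo efibootmgr -v

      # Reinstall GRUB's EFI image and regenerate its menu.
      sudo grub-install --target=x86_64-efi
      sudo update-grub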

    Read the article

  • Speedup vmware esx guest hdd access

    - by Uwe
    Hello, we run several Windows servers and Windows clients on our VMware ESX. One of the Windows 2003 servers is our build server, with heavy HDD reads and writes. This machine used to run on physical hardware and was virtualized onto the ESX host. Is there any way to increase the HDD performance? Perhaps there are special Windows (guest) drivers? The files are stored on a RAID 6 array. The performance graph in the VMware Infrastructure Client shows reads of up to 650 KBps and writes of up to 4000 KBps. Thank you. Regards, Uwe

    Read the article

  • My View on ASP.NET Web Forms versus MVC

    - by Ricardo Peres
    Introduction

    A lot has been said on Web Forms and MVC, but since I was recently asked about my opinion on the subject, here it is. First, I have to say that I really like both technologies and I don't think either is going away – just remember SharePoint, which is built on top of Web Forms. I see them as complementary, targeting different needs and leveraging different skills. Let's go through some of their differences.

    Rapid Application Development

    Rapid Application Development (RAD) is the development process by which you have an Integrated Development Environment (IDE), a visual design surface and a toolbox, and you drag components from the toolbox to the design surface and set their properties through a property inspector. It was introduced with some of the earliest Windows graphical IDEs such as Visual Basic and Delphi. With Web Forms you have RAD out of the box. Visual Studio offers a generally good (and extensible) designer for the layout of pages and web user controls. Designing a page may simply be about dragging controls from the toolbox, setting their properties and wiring up some events to event handlers, which are implemented in code-behind .NET classes. Most people will be familiar with this kind of development and enjoy it. You can see what you are doing from the beginning. MVC also has designable pages, called views in MVC terminology; the problem is that they can be built using different technologies, some of which, at the moment (MVC 4), do not support RAD – Razor, for example. I believe it is just a matter of time for that to be implemented in Visual Studio, but it will mostly consist of HTML editing, and until that day comes, you have to live with source editing.

    Development Model

    Web Forms features the same development model that you are used to from Windows Forms and other similar technologies: events fired by controls and automatic persistence of their properties between postbacks. For that, it uses concepts such as view state, which some may love and others may hate, because it may be misused quite easily, but otherwise does its job well. Another fundamental concept is data binding, by which a collection of data can be fed to a control and have it render that data somehow – just think of the GridView control. The focus is on the page; that's where it all starts, and you can place everything in the same code-behind class: data access, business logic, layout, etc. The controls take care of generating a great part of the HTML and JavaScript for you. With MVC there is no free lunch when it comes to data persistence between requests: you have to implement it yourself. As for event handling, that is at the core of MVC, in the form of controllers and action methods; you just don't think of them as event handlers. In MVC you need to think more in HTTP terms, so HTTP verbs such as POST and GET are relevant to you, and you may write actions to handle one or the other. Also of crucial importance is model binding: the way by which MVC converts your posted data into a .NET class. This is something that ASP.NET 4.5 Web Forms has introduced as well, but it is a cornerstone in MVC. MVC also has built-in validation of these .NET classes, which out of the box uses the Data Annotations API. You have full control of the generated HTML – except for that coming from the helper methods, usually small fragments – which requires a greater familiarity with the specifications.
    You normally rely much more on JavaScript APIs (they are even included in the Visual Studio template), because much less is done for you.

    Reuse

    It is difficult to accept a professional company/project that does not employ reuse. It can save a lot of time, thus cutting costs significantly. Code reused in several projects matures as time goes by and helps developers learn from past experiences. ASP.NET Web Forms was built with reuse in mind, in the form of controls. Controls encapsulate functionality and are generally portable from project to project (with the notable exception of web user controls, those with an associated .ASCX markup file). ASP.NET has dozens of controls and it is very easy to develop new ones, so I believe this is a great advantage. A control can inject JavaScript code and external references as well as generate HTML and CSS. MVC, on the other hand, does not use controls – it is possible to use them, with some view engines like ASPX, but it is just not advisable because it breaks the flow – where do Init, Load, PreRender, etc., fit? The closest thing to controls is extension methods, or helpers. They serve the same purpose – generating HTML, CSS or JavaScript – and can be reused between different projects. What differentiates them from controls is that there is no inheritance and no context – an extension method is just a static method which doesn't know where it is being called from. You also have partial views, which you can reuse in the same project, but there is no inheritance there either. This, in my view, is a weakness of MVC.

    Architecture

    Both technologies are highly extensible. I have started writing a series of posts on ASP.NET Web Forms extensibility and will probably write another series on MVC extensibility as well. A number of scenarios are covered by either of these models, and some extensibility points apply to both, because, of course, both stand upon ASP.NET. With Web Forms, if you're like me, you start by defining your master pages, pages and controls, with some helper classes to glue everything together. You may as well throw in some JavaScript, but probably your main work will be with plain old .NET code. The controls you define have the chance to inject JavaScript code and references, through either the ScriptManager or the page's ClientScript object, as well as generating HTML and CSS code. The master page and page model with code-behind classes offer a number of "hooks" by which you can change the normal way of things; for example, in a page you can access any control on the master page, add script or stylesheet references to its head and even change the page's title. Also, with Web Forms, you typically have URLs in the form "/SomePath/SomePage.aspx?SomeParameter=SomeValue", which isn't really SEO friendly, not to mention the HTML that some controls produce, far from standards, optimization and best practices. In MVC, you also normally start by defining the master page (or layout) and views, which are the visible parts, and then define controllers in separate files. These controllers do not know anything about the views, except the names and types of the parameters that will be passed to and from them. The controller is responsible for the data access and business logic, eventually relying on additional classes for this purpose. In a controller you only receive parameters and return a result, which may be a request for the rendering of a view, a redirection to another URL or a JSON object, to name just a few.
    The controller class does not know anything about the web, so you can effectively reuse it in a non-web project. This separation, and the lack of programmatic access to the UI elements, makes it very difficult to implement, for example, something like SharePoint with MVC. OK, I know about Orchard, but it isn't really a general-purpose development framework; it is, instead, a CMS that happens to use MVC. Not having controls render HTML for you gives you in turn much more control over it – it is your responsibility to create it, which you can either consider a blessing or a curse; in the latter case, you probably shouldn't be using MVC at all. Also, MVC URLs tend to be much more SEO-oriented, if you design your controllers and actions properly.

    Testing

    In a well-defined architecture, you should separate business logic, data access logic and presentation logic, because these are all different things, and there might even be the need to switch one implementation for another: for example, you might design a system which includes a data access layer, a business logic layer and two presentation layers, one on top of ASP.NET and the other with WPF; and the data access layer might be implemented first using NHibernate and later on switched to Entity Framework Code First. These changes are not that rare, so care should be taken in designing the system to make them possible. Web Forms is difficult to test, because it relies on event handlers which are only fired in web contexts, when a form is submitted or a page is requested. You can call them with reflection, but you have to set up a number of mocking objects first, HttpContext.Current being the first that comes to mind. MVC, on the other hand, makes testing controllers a breeze, so much so that it even includes a template option for generating boilerplate unit test classes from the start. A well-designed controller (from the unit test point of view) will receive everything it needs to work as parameters to its action methods, so you can pass whatever values you need very easily. That doesn't mean, of course, that everything can be tested: views, for instance, are difficult to test without actually accessing the site, but MVC offers the possibility to compile views at build time, so that, at least, you know you don't have syntax errors beforehand.
    Myths

    Some popular but unfounded myths around MVC include:
    - You cannot use controls in MVC: not true; actually, you can, at least with the Web Forms (ASPX) view engine, and the declaration and usage is exactly the same as with Web Forms;
    - You cannot specify a base class for a view: with the ASPX view engine you can use the Inherits attribute of the Page directive, and with this and all the others you can use the pageBaseType and userControlBaseType attributes of the <pages> element;
    - MVC shields you from doing "bad things" on your views: well, you can place any code in a code block, at least with the ASPX view engine (you may be starting to see a pattern here), even data access code;
    - The model is the entity model, tied to an O/RM: the model is actually any class that you use to pass values to a view, including (but generally not recommended) an entity model;
    - Unit tests come with no cost: unit tests generally don't cover the UI, although there are frameworks just for that (see WatiN, for example); also, for some tests, you will have to mock or replace either the HttpContext.Current property or the HttpContextBase class yourself;
    - Everything is testable: views aren't, without accessing the site;
    - MVC relies on HTML5/some_cool_new_javascript_framework: there is no relation whatsoever; MVC renders whatever you want it to render and does not require any framework to be present. The thing is, the subsequent releases of MVC happened at a time when Microsoft had become much more involved in standards, so the files and technologies included in the Visual Studio templates reflect this, and it just happens to work well with jQuery, for example.

    Conclusion

    Well, this is how I see it. Some folks may think that I am being too rude on MVC, probably because I don't like it, but that's not true: like I said, I do like MVC and I am starting my new projects with it. I just don't want to go along with those who say that MVC is much superior to Web Forms; in fact, some things you can do much more easily with Web Forms than with MVC. I will be more than happy to hear what you think on this!

    Read the article

  • Remote Desktop Client Crashes following domain join

    - by Roberto Charlie Ciarleglio
    I recently joined my laptop to our Windows domain and now the Remote Desktop client crashes when I try to connect to any machine. It works if I run it as administrator, but not ordinarily. The domain join migrated my local profile to the domain profile, which I think is where the problem lies. I'm guessing it's a permission thing, as I had a similar problem with Dropbox and had to delete registry keys and reinstall. I can't figure out how to fix this problem, though. The Event Viewer shows this:
      Faulting application name: mstsc.exe, version: 6.1.7601.17514, time stamp: 0x4ce7ab44
      Faulting module name: FACredProv2.dll, version: 2.4.95.1, time stamp: 0x4bb8d766
      Exception code: 0xc0000005
      Fault offset: 0x00000000000025b2
      Faulting process id: 0xb24
      Faulting application start time: 0x01cd43fbd3a81fba
      Faulting application path: C:\Windows\System32\mstsc.exe
      Faulting module path: C:\Windows\System32\FACredProv2.dll
      Report Id: 154ee55a-afef-11e1-a443-b8ac6f704c5d
    Any help would be appreciated!

    Read the article

  • Linux wget: how to display the progress percentage after the session has been reloaded?

    - by skyrail
    I am running Debian Squeeze in console mode on a plug computer. I control it by opening an SSH session from a Windows machine on the same local network. I started downloading a large file using wget, and I get a console progress bar showing the percentage of data downloaded, the file size, and the download rate. When I close the session, Debian keeps running and downloading; fine. But when I close and reopen a session, how can I see how much data has been downloaded so far, using a Linux command? Thanks.
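
    A couple of ways this is commonly checked from a new session (a sketch; bigfile.iso stands in for whatever file wget is actually writing):

      # Confirm wget is still running, then watch the partial file grow.
      ps aux | grep [w]get
      ls -lh bigfile.iso
      watch -n 5 ls -lh bigfile.iso

    For future downloads, starting wget inside screen or tmux keeps the original progress bar available after reattaching to the session.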

    Read the article

  • What's the best way to know if your web server goes down?

    - by Mike Christensen
    I noticed my website only got 8 visitors today, which means it probably went down very early this morning and I never noticed. Why it went down is another story. Ideally, I'd like to be emailed if my web server becomes unresponsive or does not return an HTTP 200. Is there a cloud-based service (either free or pretty cheap) that can monitor a website? If not, is there a good free/open-source program I can run on either a Linux or Windows machine that will monitor a website and email me if it goes down? Thanks!
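
    For the roll-your-own variant, a minimal cron-driven sketch (assuming a Linux box with curl and a working mail command; the URL and address are placeholders):

      #!/bin/sh
      # check_site.sh - run from cron, e.g. */5 * * * * /usr/local/bin/check_site.sh
      URL="http://www.example.com/"
      STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 30 "$URL")
      if [ "$STATUS" != "200" ]; then
          echo "$URL returned HTTP $STATUS at $(date)" | mail -s "Site down" you@example.com
      fi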

    Read the article

  • Can it be harmful to grant jackd realtime priority?

    - by SuperElectric
    I am apt-get installing Ardour, a sound mixing program, just to try it out. Installing Ardour also installs JACK, a dependency. As part of the JACK installation script, I get the following dialog: If you want to run jackd with realtime priorities, the user starting jackd needs realtime permissions. Accept this option to create the file /etc/security/limits.d/audio.conf, granting realtime priority and memlock privileges to the audio group. Running jackd with realtime priority minimizes latency, but may lead to complete system lock-ups by requesting all the available physical system memory, which is unacceptable in multi-user environments. Enable realtime process priority? I'm installing on my laptop, which never has multiple simultaneous users. I still have concerns: is JACK something that'll be used by the system itself to play any sound (i.e. will it replace ALSA)? If so, does that mean that if I enable realtime priority for JACK, I'll run a slight risk of freezing the machine whenever any sound is played? Or is JACK only going to be used by Ardour for now (until I install some other JACK-dependent program)? Thanks, -- Matt
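
    For reference, the file that the installer offers to create typically amounts to something like the following (a sketch; the exact values the JACK package writes may differ):

      # Illustrative contents for /etc/security/limits.d/audio.conf
      # (grants members of the audio group realtime and memlock privileges):
      #
      #   @audio   -   rtprio    95
      #   @audio   -   memlock   unlimited
      #
      # Check that your login is actually in the audio group:
      groups | grep -w audio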

    Read the article

  • Ubuntu not connecting to network in Hyper-V

    - by soandos
    I am unable to connect the Ubuntu guest (both 12.10 and 12.04) to the internet via Hyper-V. Here is what I have done so far (with much thanks due to @Kronos's blog post on the topic):
    - Created a switch in the virtual switch manager with the connection set to external and selected my wifi card (Intel Centrino Ultimate-N 6300 AGN). If it matters, the Microsoft Filtering Platform is checked under extensions.
    - Added this switch to my Ubuntu guest. I also tried a different wireless card (Atheros 9285); same issue. Connecting through my wired card works just fine (assuming I select that card and am wired in, of course).
    - Making it a legacy network adapter does not fix the issue.
    Ubuntu can see this connection but is unable to connect to it. What follows is what I attempted in order to get Ubuntu to connect:
    - Started and restarted the network manager
    - Restarted the machine
    - Verified that it could in fact see the adapter (this resulted in "device not ready" a few times)
    How can I get this to work properly?

    Read the article

  • NAS4Free disk encryption and ZFS error

    - by MiNT
    I have installed NAS4Free on a VM and, as recommended, I installed it on a 1 GB virtual disk and assigned another 500 GB disk to this VM for file storage. I created the disk, encrypted it, created a ZFS virtual device, and then a ZFS storage pool. Everything was working; on every restart of this VM I just needed to go in and mount the encrypted drive. Recently I upgraded the host machine, and now I can't mount it or make it work. I have tried removing everything and setting everything up from scratch, except for formatting the disk: I reused the encrypted disk without formatting it. Does anybody have a suggestion on how I can at least get my data back? Can I somehow mount the encrypted drive, even with another utility, just to recover the data that was on it?
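
    NAS4Free's disk encryption is based on FreeBSD's geli, so recovery attempts from a FreeBSD shell usually look roughly like the sketch below (the device name and pool name are placeholders, and this assumes the volume was protected by a passphrase only; if a key file was also used, geli needs it via -k):

      # Attach the encrypted provider (prompts for the passphrase).
      geli attach /dev/da1

      # See whether the decrypted provider exposes an importable pool, then import it.
      zpool import
      zpool import -f tank
      zfs list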

    Read the article

  • Wireless Access Point - Can't ping other machines on the wireless network

    - by Surfer513
    I have a wireless access point (Netgear), and I have set it up so that it has an IP address in the current subnet (let's say 192.168.2.0, subnet mask 255.255.255.0). The machine it is connected to via Ethernet cable has an IP in the same subnet as the AP. The machines that are connected to the AP over the wireless connection also have IP addresses in the same subnet as the rest of the network (192.168.2.0). All machines can ping the access point, but they cannot ping each other. I don't fully understand why, because there is connectivity and all of the machines are in the same subnet. I realize this is a layer 3 device, but is there an issue because of this AP's lack of gateway capabilities (i.e. no routing table, etc.)?

    Read the article

  • Cannot add VMDK to VM that was cloned with FlexClone

    - by Daniel Lucas
    I have a virtual machine called VM-A with two VMDKs on volume VOL-A; call these VMDK-1 (system) and VMDK-2 (data). I want to clone VMDK-2 and attach that clone to VM-A as a new disk, but I'm getting an error. Here are my steps:
    1. I use the following command to clone: clone start /vol/VOL-A/VMDK-2 /vol/VOL-A/VMDK-3
    2. I run clone status, which shows success, and I can see the new file in the volume.
    3. In vCenter I edit the settings of VM-A and try to add VMDK-3, but get the following error: "Failed to add disk scsi0:4. Failed to power on scsi0:4."
    I've tried adding this cloned disk to other VMs and get the same error. What could be the issue? My specs are below.
    NetApp FAS 2040, Data ONTAP 8.0.1, vSphere ESXi 4.1, vCenter Server 4.1
    Thanks, Daniel

    Read the article

  • Yay! Oracle Solaris 11.1 Is Here!

    - by rickramsey
    Even the critters are happy. This is no cosmetic release. It's got TONS of new stuff for both system admins and system developers. In the coming weeks and months I'll highlight specific new capabilities, but for now, here are a few resources to get you started.
    What's New (pdf): describes enhancements for sysadmins in installation, system configuration, virtualization, security and compliance, networking, data management, kernel/platform support, network drivers, and the user environment. For system developers it covers the Preflight Applications Checker, Oracle ExaStack Labs (available to Oracle Partner Network Gold-level members for application certification), Oracle Solaris Studio, the integrated Java Virtual Machine (JVM), whose updates are now managed using the Image Packaging System (IPS), and migration guides and technology mapping tables for AIX, HP-UX and Red Hat Linux.
    Download: free downloads for SPARC and x86 are available, along with instructions and tips for using the new repositories and Image Packaging System.
    Tech Article: How to Upgrade to Oracle Solaris 11.1: you can upgrade using either Oracle's official Solaris release repository or, if you have a support contract, the Support repository. Peter Dennis explains how.
    Documentation: superbly written instructions from our dedicated cadre of world-renowned but woefully underpaid technical writers, covering Getting Started; Installing, Booting, and Updating; Establishing an Oracle Solaris Network; Administering Essential Features; Administering Network Services; Securing the Operating System; Monitoring and Tuning; Creating and Using Virtual Environments; Working with the Desktop; Developing Applications; Reference Manuals; and more.
    Training: and don't forget the new online training courses from Oracle University! I really liked them. Here are my first and second impressions.

    Read the article

  • Change environment variables as standard user (Windows 7)

    - by SealedSun
    When clicking on "Advanced system settings", I need to login as the administrator and hence only edit the administrators environment variables (in addition to the machine wide ones). How do I edit the environment variables of a standard user? Details With the migration to Windows 7, I decided to work as a standard user instead of an unprivileged administrator. Works well so far but I encountered a tiny problem: When I try to change per user environment variables via the control panel I have to login as an administrator. But since I run that part of the control panel as the administrator I can only edit the administrators variables. How am I supposed to edit my own environment variables? Without resorting to extreme measures, such as editing the registry (as suggested in "Is there any command line tool that can be used to edit environment variables in Windows?" )

    Read the article

  • Server Configuration / Important Parameter for 500Req/Second

    - by Sparsh Gupta
    I am configuring a server to be used as an nginx server for a very high-traffic website. It is expected to receive traffic from a large number of IP addresses simultaneously: around 500 requests/second, with at least 20 million unique IPs connecting to it. One of the problems I noticed on my previous server was related to iptables / ip_conntrack. I am not familiar with this behaviour and would be glad to know which parameters of an Ubuntu / Debian (32/64-bit) machine I should tweak to get maximum performance from the server. I can put a lot of RAM in the server, but the mission-critical concern is response time. We ideally don't want any connection to be hanging, timing out or waiting, and we want the lowest possible overall response times.
    P.S. We are also looking for a kick-ass freelance sysadmin who can help us figure all this out and set it up. Reach me if you have some spare time and are interested in working on some very high-traffic web servers.
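
    As an illustration of the conntrack and connection-volume knobs that usually come up in this discussion, a sketch with placeholder values (not a tuned recommendation; on older kernels the key is net.ipv4.netfilter.ip_conntrack_max instead of net.netfilter.nf_conntrack_max):

      # Raise connection-tracking and socket limits (illustrative values).
      sysctl -w net.netfilter.nf_conntrack_max=262144
      sysctl -w net.core.somaxconn=4096
      sysctl -w net.ipv4.tcp_max_syn_backlog=8192
      sysctl -w fs.file-max=500000
      # Persist the same keys in /etc/sysctl.conf so they survive reboots.

      # Or skip conntrack for web traffic entirely with a raw-table NOTRACK rule.
      iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK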

    Read the article

  • IPSEC Windows 2008 <--> Fortinet 60B

    - by Elijah Glover
    I am trying to establish an IPSEC VPN between an office DSL connection and a single virtual machine. I have done hub-and-spoke setups before with Cisco and Fortinet routers, but never hardware <-> software. Fortigate 60B: 10.20.1.1/24; Windows Server 2008 R2 installed on a VM. I have seen some guides for doing this with Juniper ScreenOS (the guide uses the first release of Server 2008, which introduced Windows Firewall with Advanced Security), but none using Fortinet equipment. Has anyone ever been successful with this? Or should I install RAS/PPTP so I can dial in?

    Read the article

  • Windows 2008R2 blocks outbound LDAP for non-admins?

    - by Jon Bailey
    I've got a Windows 2008 R2 terminal server with ~30 users on it. It's joined to a Samba-based domain. During the login script, we connect directly to the LDAP server to pull out certain profile information. This used to work just fine. Now it doesn't, but only for non-local-admin accounts; local admins work fine. As a non-local-admin:
    - Connections to ports 389 or 636 just terminate (Wireshark on the LDAP server reveals no connection attempt)
    - Connections to other ports on the same server work fine
    - The same thing happens with multiple LDAP servers
    - Windows Firewall is disabled
    - I can't find any other rules/policies that may block this
    Since this used to work, I suspect something came down in an update, but for the life of me I can't find what. EDIT: I just ran Wireshark on the machine itself and didn't see anything when connecting to the LDAP server in question (or any LDAP server, for that matter). I can, however, see traffic when I connect to that server on another port.

    Read the article

  • Dynamips Virtual Router not pinging external host after bridging

    - by maiky
    I have set up a virtual router image with Dynamips and set up bridging between the tap0 interface of the virtual router and eth0 and br0 with these commands:
      [root@cisco_host]# brctl addbr br0
      [root@cisco_host]# ifconfig br0 up
      [root@cisco_host]# ifconfig eth1 0.0.0.0
      [root@cisco_host]# brctl addif br0 eth1
      [root@cisco_host]# ifconfig br0 192.168.0.100 netmask 255.255.255.0 up
    In the dynagen configuration I have:
      f0/0 = NIO_tap:tap0
    So:
      [root@cisco_host]# brctl addif br0 tap0
      [root@cisco_host]# ifconfig tap0 up
      router(config)#int fa0/0
      router(config-if)#ip address 192.168.0.101 255.255.255.0
      router(config-if)#no shutdown
    After the configuration above I was expecting that from inside the router I could ping an external machine with IP 192.168.0.1. Actually, from the host I can ping the external host 192.168.0.1 as well as 192.168.0.100 and 192.168.0.101. So what am I missing here? tap0 is bridged with br0 and, in turn, with eth1, so why is the router not pingable from 192.168.0.1 and vice versa?

    Read the article

  • Problem modifying read-only files on Samba NAS

    - by Felix Dombek
    Hi, I have files on a Samba server on the local company network and I am accessing them from a Windows Vista machine. Usually, if I want to delete a directory containing write-protected (read-only) files, Windows asks "This file is read-only, are you sure?". However, when I do this with a directory on the server, Windows just tells me that I need permissions. The workaround is to remove the read-only flag from the directory and all contained files and then delete them. However, I have a TortoiseSVN-versioned directory on the server, and the .svn directories contain read-only files, so I need to remove the read-only flags from the directory before every commit or else it fails. This is quite distressing and shouldn't be so. Does anyone know how to attack this problem? (If someone knows how to tell TortoiseSVN not to make its files read-only, that would probably be OK as well.) Thanks!
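
    One Samba share option that is relevant to this symptom is delete readonly, which allows read-only files to be deleted over the share. A sketch of how it would be applied on the server (the share name and path are placeholders; verify the change with testparm before reloading):

      # In /etc/samba/smb.conf, inside the share's section, add:
      #
      #   [projects]
      #       path = /srv/projects
      #       delete readonly = yes
      #
      # Then check the config and reload the running smbd.
      testparm -s
      smbcontrol smbd reload-config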

    Read the article

  • How to fix? => Your system administrator does not allow the use of saved credentials to log on to the remote computer

    - by Pure.Krome
    At our office, any of our Windows 7 clients gets this error message when we try to RDP to a remote W2K8 server outside of the office: "Your system administrator does not allow the use of saved credentials to log on to the remote computer XXX because its identity is not fully verified. Please enter new credentials." A quick Google search leads to some posts that all suggest I edit group policy, etc. I'm under the impression that the common fix for this is to follow those instructions per Windows 7 machine. Ack :( Is there any way I can do something via our office Active Directory that automatically updates all the Windows 7 clients on the office LAN?

    Read the article

  • How to get diagnostic information on a failing Windows Server 2012 install

    - by Tobius Maximus
    I am trying to install a copy of Windows Server 2012 on two machines. Both machines go through the initial "Files loading" progress bar and get to the new Windows Server 2012 logo. However, the installation (on both machines) will immediately fail, and the machine will restart. I am trying to figure out if there's a way to display diagnostic information about the error before or after restarting, but Google is not helping me at all. Is there a key combination I can hit that will display an error log, perhaps?

    Read the article
