Search Results

Search found 32343 results on 1294 pages for 'good practice'.

  • Why is my connection slow?

    - by Jay R.
    I have a Dell Precision T5400 with a Broadcom 1Gb onboard NIC. For some strange reason, when I access machines on our local network, the best I can get is around 125KB/s download speed. My laptop, which has a 10/100Mb NIC onboard, usually gets around 300KB/s or better from the same network resource. Both machines are plugged into the same 1Gb switch, which connects to our local network wall jack at 100Mb half duplex. There is also a printer plugged into the same switch at 100Mb full. The resource I'm using for the test is a 30MB zip file copied from a Jetty web server that runs as part of a CruiseControl installation. The CruiseControl installation is running Windows XP with full real-time antivirus and Altiris patch management and inventory running. That stuff on its own is eating some of the download speed. I've seen the laptop reach into the multiple-MB/s download range before, but the desktop never seems to get past 125KB/s to 130KB/s. In Windows XP, before I upgraded the driver in the desktop, it was that slow. In Fedora, it is still slow even though it appears to be using the same driver version as the upgraded Windows driver. The upgraded Windows driver is faster, but still not nearly as fast as the laptop. What gives? Any insight into improving the situation would be appreciated. Could it be that the Broadcom board just isn't that good, or that the Linux driver is just not as good as the Windows one?
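
    A useful first check on the Fedora side is what the NIC actually negotiated; a gigabit card that has fallen back to half duplex against a 100Mb jack produces exactly this kind of slowdown. A minimal sketch, assuming the interface is eth0 and a reachable test URL (both are assumptions):

      # What did the link negotiate? Look at the Speed and Duplex lines.
      sudo ethtool eth0

      # Duplex mismatches usually show up as errors/collisions in the NIC stats.
      sudo ethtool -S eth0 | grep -iE 'error|collision'

      # Raw HTTP download speed, taking file managers and SMB out of the picture.
      curl -o /dev/null http://buildserver/artifacts/test-30mb.zip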

  • Sandra reports my CPU as "Engineering Sample", how can I be sure this is correct?

    - by stevenvh
    I ran SiSoftware's Sandra on my new PC, and for my CPU it reports:

      Generation        : G8 / T29
      Name              : TN0 (Trinity) FX/Opteron 32nm (ES)
      Revision/Stepping : 0 : 10 / 1
      Stepping Mask     : TN-A1
      Microcode         : MU6F10010F

    The (ES) is a well-known code in product development, meaning "Engineering Sample". Those are beta versions of the CPU, which may still contain some bugs, or even have features switched off. I contacted both the PC's manufacturer Medion and AMD about this. I had to downvote the Medion helpdesk here. The person I talked to boldly said Sandra was wrong (without knowing how Sandra got this information; he didn't even know the software) and used the word "impossible". His conclusion was "We're not taking this in consideration for service". Right. So, if you like Medion for their good prices, but like good support even better, you may want to consider buying your PC elsewhere. AMD was more helpful, but wanted to be sure before replacing the part (which I find reasonable). They suggested that I dismount the cooler from the CPU to check what is printed on it, to be sure. I'm a bit reluctant here: I would have to wipe the thermal paste from the CPU, and won't know for sure that my cooling will still be OK afterwards. Questions:

    - Has anybody actually found a confirmed ES CPU in her PC?
    - Is anybody aware of Sandra erroneously reporting CPUs as Engineering Samples?
    - How can you tell an ES, apart from the print on the package?
    - Shouldn't the Stepping Mask identify the CPU uniquely?
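
    One low-risk cross-check, assuming you can boot the machine from a Linux live USB: read the family/model/stepping straight from what the CPU itself reports and compare it against Sandra's claim. It won't prove or disprove ES status on its own, but it shows whether Sandra is reading the hardware correctly.

      # Brand string plus family/model/stepping as reported by the CPU
      grep -E 'model name|cpu family|^model|^stepping' /proc/cpuinfo | sort -u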

  • How to test server throughput

    - by embwbam
    I've always used ApacheBench (ab) to try to get a rough idea of how many requests per second my server can handle. I read that it was good, and it seemed to work well. Enter node.js, which is fully event-based, so it never blocks. If I run ab against a simple hello-world server, it can handle 2500 requests per second or so. However, if I put a timeout in the hello-world function, so that it responds after 2 seconds, ab reports a dramatically reduced throughput: about 50/s. I'm running 100 concurrent connections with ab. If I increase the concurrency, the throughput goes up. This makes sense, because ab is basically sending out requests in batches of 100, which come back every 2 seconds:

      100 requests / 2 seconds = 50 requests / second

    If I increase the concurrency to about 400 or 500, it starts to crash. I don't think I've hit node.js's limit; I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something. Any way I can get a good guess about how many requests my server can handle? I want to make sure the test computer isn't the one causing the problem.
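
    Before blaming the server, it is worth ruling out client-side limits in the shell that runs ab; the numbers below are arbitrary examples:

      # Current per-process file-descriptor cap.
      ulimit -n

      # Raise it for this shell (may need root or an /etc/security/limits.conf change).
      ulimit -n 65535

      # Then push concurrency well above the expected number of in-flight requests.
      ab -n 20000 -c 1000 http://localhost:8000/

    Running ab from a second machine also stops the test client from competing with node.js for the same CPU and descriptor pool.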

  • Add shortcuts to (Windows + X) context menu

    - by KasiyA
    I want to add services.msc to the Win+X context menu in Windows 8 (x64). I know there is a similar question here, but using the Win+X Editor is not good, because it doesn't add an underlined accelerator key to the shortcuts it creates, and not having a quick underlined key is no good. I want to do it manually. The context menu folder is:

      C:\Users\User_Name\AppData\Local\Microsoft\Windows\WinX

    And the hidden desktop.ini file (at ...\WinX\group2\desktop.ini) is as below:

      [LocalizedFileNames]
      1 - Run.lnk=@%SystemRoot%\system32\shell32.dll,-12710
      4 - Control Panel.lnk=@%SystemRoot%\system32\shell32.dll,-4161
      5 - Task Manager.lnk=@%SystemRoot%\system32\authui.dll,-12139
      3 - Windows Explorer.lnk=@%SystemRoot%\system32\shell32.dll,-22067
      2 - Search.lnk=@%SystemRoot%\system32\shell32.dll,-30517

    I copied a services.msc shortcut into the group2 folder under the above path and added this line to the desktop.ini file:

      6 - Services.lnk=@%SystemRoot%\system32\services.msc,-?????

    First question: I don't know if this line I added is correct or not, and I don't know what to use instead of -?????. Last question: why are the desktop.ini contents not sorted? I tried to sort them manually, but when I restarted Explorer they were out of order again. Why?

  • Using a Layer 2 switch as a core switch

    - by imtech
    I have a small user base of about 20 people on at a time, spiking up to about 80 people during peak times. Most people (80+%) are connected over our Aruba managed wireless system. We have a Windows domain. We have three 24-port switches all connecting back to a central 48-port switch, where additional access ports, the firewall, servers, and the wireless controller all centrally connect. It's a flat network with dumb switches. I'm in the process of upgrading our infrastructure. Cisco pricing for switches is pretty high for us, so I've been looking at HP ProCurves, which seem to be within our budget range. I want to eventually make use of 802.1x, SNMP, QoS for possible VoIP upgrades, VLANs to separate a guest VLAN from authenticated users, and other more advanced features. PoE would be nice, but that's probably too expensive for us. I was thinking of having our core switch be a ProCurve 2610 and the rest of the switches that centrally connect to it be ProCurve 2510s. A true, full-blown Layer 3 switch is way out of our price range, but a 2610 seems to be good enough for us. The 2610 does static routing, which ought to be good enough for us, but I'm in unfamiliar territory, so I'm looking for any gotchas. Also, should all the switches be 2610s, or just the core switch? Do I even need the 2610, or can I just go with all 2510s? I'm new to VLANs as well, so I'm not sure what it is I need, but I would like an affordable infrastructure that won't need replacing 2-3 years down the line because I chose a product that was lacking.
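
    For reference, the configuration this plan implies on the ProCurve CLI looks roughly like the sketch below; the VLAN IDs, names, port ranges and addresses are all invented, so verify the exact syntax against the 2610 manual.

      configure
      vlan 10
         name "Staff"
         untagged 1-20
         ip address 192.168.10.1 255.255.255.0
         exit
      vlan 20
         name "Guest"
         untagged 21-24
         ip address 192.168.20.1 255.255.255.0
         exit
      ip routing
      ip route 0.0.0.0 0.0.0.0 192.168.10.254

    The edge switches only need the VLANs tagged on their uplinks, so 2510s at the edge with a single 2610 doing the inter-VLAN static routing is a plausible budget layout.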

  • Why does MySQL have so many more open and fragmented tables than tables in the DB?

    - by kswift
    I've been working on making our database run a little smoother, and I've had good results over the past week. But there are still some things I don't understand. For one thing, the database has 25 tables, but mysql status shows 512 are open:

      mysqladmin status
      Uptime: 212854  Threads: 1  Questions: 43041  Slow queries: 7  Opens: 2605
      Flush tables: 1  Open tables: 512  Queries per second avg: 0.202

    I've read that MyISAM opens extra file descriptors, and a few other reasons why the number of open tables might be higher than 25, but I am guessing that 512 is not a good thing. Any suggestions on why this might be, or what I should be looking into? I've also been using mysqltuner, and it's been helpful. But it has consistently listed the number of fragmented tables at 207. In phpMyAdmin I've selected all the tables and optimized them several times. It hasn't reduced the number of fragmented tables that mysqltuner reports. I think I am missing some important concept about how this all works. Does anyone have any suggestions to point me in the right direction, or narrow down Google searches, or just generally help me be less clueless? Thanks!
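
    Two things worth checking from the command line; a sketch with placeholder credentials, and note that on older MySQL versions the relevant variable is table_cache rather than table_open_cache:

      # How big is the table cache? Open tables grow toward this limit, and each
      # MyISAM table can hold several descriptors across connections.
      mysql -u root -p -e "SHOW VARIABLES LIKE 'table%cache%';"
      mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Open%tables';"

      # mysqltuner counts fragmentation from DATA_FREE; this lists the same candidates.
      mysql -u root -p -e "SELECT table_schema, table_name, data_free
                           FROM information_schema.tables WHERE data_free > 0;"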

  • Motherboard running rather hot while gaming

    - by I take Drukqs
    Case: Antec 1200
    Mobo: Gigabyte GA-X58A-UD3R
    CPU: Intel i7 950 (stock cooler)
    GPU: EVGA GeForce 570 GTX
    RAM: 2x 2 GB (4 GB total) DDR3 dual-channel Corsair
    OS: Windows 7 Home Premium 64-bit

    This is my first build and it's brand new. I had no problems putting it all together in a few hours one evening, and I consider myself to be pretty good with computers. Not to brag or anything like that! Just saying I've been fiddling with them since I was in diapers and I have a good amount of experience under my belt, just not with certain things yet. Recently, while playing many of the latest games maxed out without a hitch, my motherboard has been running hot, and like anyone who's ever built a computer, that scares the life out of me. I checked HWMonitor and saw that my motherboard sometimes reached temperatures of around 52-78°C (the 78 obviously being what's scaring me). I was wondering if such a temperature is normal, and if not, what the problem could be. Airflow in my case is phenomenal, and besides having to ship back a faulty GPU and reseat my CPU, my first build has been a very large success which I am enjoying tremendously. There is literally almost no dust in my case due to it being very new, as previously mentioned, and my RAM sticks are in the correct slots for dual-channel mode. My cable management is pretty great in my opinion, with only cables from my PSU lingering in the bottom of the case. At any given opportunity I ran my cables behind my mobo. Airflow should definitely not be a problem, because my CPU only goes up to about 60°C and my GPU only goes up to about 80°C. Thank you very much in advance.

  • Affordable combined Ruby/Rails/Redmine + Subversion hosting?

    - by Pekka
    I'm a self-employed web developer, and after nine years of hard work I'm looking to become a bit more "vagrant" starting next year, do some much-needed traveling and work off and on, making use of one of the greatest advantages of a programming job: the ability to work from virtually anywhere. For that, I am looking for a reliable hosting company I can entrust my code to in the form of a number of Subversion repositories, plus an installation of the Redmine project management tool. As my financial situation may vary while traveling, I am looking for something I can pay up front for a year or two, and that is obviously not too pricey. I don't care where the company is located, as long as it's trustworthy and solid, meaning it's not likely to go out of business next month. Does anybody have good recommendations? Preferably from your own personal, good experience. I have looked at CVSDude / Codesion, and while they are certainly great, they don't offer Redmine of course, and seem to be aimed mainly at bigger organizations. What I would need:

    - 2-5 gigs of space minimum, freely distributable between SVN and Redmine attachments
    - Unlimited number of Subversion projects
    - Access control (team members / checkout-only accounts / etc.); I don't mind configuring the svn settings on a file basis myself
    - The possibility to map a custom domain to the package that is hosted elsewhere
    - Frequent backups and access to those backups through FTP or other means

    I have been running my own virtual server for this until now, but I don't want the hassle, especially on the security side, while I may not always have the internet connection to fix problems that come up.

  • A VPS mail server

    - by microspino
    Hello, I'm trying to replace Citadel on my Virtual Private Server with something simpler. I dislike its documentation and the webmail client, and I don't need any groupware features. I need only an MTA with a nice-looking web interface and spam and virus checking. I recently found the Lamson project from Zed Shaw. Is that production ready? Have you had any real, good experience with it? On the latest-news page I see that the last release dates from December 2009. Sorry for my lack of knowledge; I'm really new to mail servers, but I have to find a solution to manage sending and receiving mail on my VPS. I would also accept building my VPS email server on a Linux system with Exim, Postfix or whatever, but I have really small needs, they will not grow for at least a year, and I will be the only user. I'm searching for something that I can build and manage easily, as I'm a novice Linux sysadmin. Having some good documentation, or at least a robust step-by-step guide, would be a plus.
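
    For scale: on Debian, one common conventional stack along these lines can be assembled from standard packages; this is a sketch, not a recommendation, and the webmail and filtering choices here are just examples:

      # MTA + IMAP access
      apt-get install postfix dovecot-imapd
      dpkg-reconfigure postfix          # choose the "Internet Site" profile

      # Spam and virus checking
      apt-get install spamassassin clamav clamav-daemon

      # A web interface
      apt-get install roundcube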

  • Debian: how to convert a filesystem from ISO-8859-1 to UTF-8?

    - by Johan
    I have an old PC that is running Debian stable and is in need of an upgrade. The problem is that it is using Latin-1 (ISO-8859-1) for everything, and since the rest of the world has moved to UTF-8, I plan to convert this computer as well. For this question I will focus on the files that are served with Samba; some have Latin-1 characters in the filenames (like åäö). Now my plan is to move all the data off this old computer onto a brand new one that is running Debian stable (but with UTF-8). Does anybody have a good idea? Thanks, Johan

    Note: later I plan to use iconv to convert the content of some files, with something like this:

      iconv --from-code=ISO-8859-1 --to-code=UTF-8 iso.txt > utf.txt

    However, I don't know of a good way to convert the filesystem itself. Note: normally I just scp from one computer to the next, but then I end up with Latin-1 characters in the UTF-8 filesystem...

    Update: did a small test round with a handful of files (with funny chars) in the filenames, and it seemed like it could work:

      convmv -r -f ISO-8859-1 -t UTF-8 *

    So it was only to execute it with --notest:

      convmv -r -f ISO-8859-1 -t UTF-8 --notest *

    Nothing more to it.
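
    Putting the two halves together, one workable order of operations (a sketch; the host and path names are invented) is to copy the tree byte-for-byte first and only then rename on the new machine, since tar and scp both preserve the raw filename bytes and convmv then reinterprets them in place:

      # Copy the share preserving raw filename bytes.
      tar -C /srv/samba -cf - . | ssh newhost 'tar -C /srv/samba -xf -'

      # On the new machine: dry run first, then for real.
      convmv -r -f ISO-8859-1 -t UTF-8 /srv/samba
      convmv -r -f ISO-8859-1 -t UTF-8 --notest /srv/samba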

  • Connection problems via Wifi (multiple devices)

    - by Kelvin Farrell
    I'm having connection issues with my router (Linksys WRT610N) at home. A number of things are happening (there may be more; this is just what I've mainly noticed)...

    1) Using my laptop (MacBook Pro, OS X Lion), I am unable to complete any operations with my external FTP server, hosted with FatCow. I can connect to it and navigate through all the files, but when I try to edit/delete/add a file, the operation times out. EVERY time. I've used two other Wifi connections on my laptop and neither has this issue.

    2) I am unable to upload photos/videos to Facebook or Twitter using my phone (Samsung Galaxy S2) or my tablet (HP Touchpad, CM9). Neither am I able to upload files to Dropbox from either device. The same thing happens in all situations: the upload begins and then just hangs at 0% forever. After about 10 minutes I am always forced to disconnect the Wifi to stop the action.

    3) My laptop is getting slow internet speeds, even though we are on 20Mb broadband. Speed tests say I'm getting a good connection and my ping is good, but when using streaming services like Spotify, it takes forever to load a page and frequently stops to buffer whilst playing a song.

    Don't know if it's worth mentioning, but I have no issues with my Xbox (Ethernet), AppleTV (Wifi) or my girlfriend's phone (Nokia Lumia 800, WP7.5) on the network. I'd really appreciate any help. This is driving me insane and is really affecting both my working and leisure use of the internet.

  • SQL Server Offsite Backups

    - by Eric Maibach
    We have about 1TB of SQL Server databases, and these databases generate about 200GB of data changes each day. Up to this point we have been doing weekly full backups, daily diff backups, and hourly transaction log backups. The full and diff backups are backed up to tape and taken offsite each day. We have been trying to move away from tapes, and our IT department purchased a Barracuda Backup device that backs up data and then sends it offsite over our internet connection. I have been trying to get this to work for our SQL Server backups, and have run into a number of problems. I normally like to just use SQL Server to perform backups instead of trying to use an agent, so that is what I tried first. However, the Barracuda device was not able to dedupe these files very well, so it ended up being too much data to try to send offsite and archive. I then tried installing the Barracuda agent and using it to back up the SQL Server databases. However, the problem I am having there is that on some of the database servers I also have files that need to be backed up, and I cannot find a way to create separate backup schedules for the file backups and the SQL Server backups. Barracuda only does full or transaction log backups. So if I want to do hourly transaction log backups, I end up doing a file system backup every hour (which is not good), or if I only schedule the backups to run once a night, I either have to do a full backup every night or only do a transaction backup once a day. None of these scenarios are good options. My question is, how is everyone else getting their large SQL Server database backups offsite? Are you just using tape, or have you found an offsite backup device that works well? Is anybody else using Barracuda to back up their SQL Server databases? If you do, then how do you have it set up?
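
    One pattern that decouples the two schedules (a sketch, not a Barracuda feature; the database and path names are invented): let SQL Server write native backups to a local folder on its own schedule, and point the agent's file backup at that folder, so the appliance only ever sees ordinary files.

      -- SQL Agent job, hourly:
      BACKUP LOG [ProductionDb]
        TO DISK = N'D:\SQLBackups\ProductionDb.trn'
        WITH INIT;

      -- SQL Agent job, nightly:
      BACKUP DATABASE [ProductionDb]
        TO DISK = N'D:\SQLBackups\ProductionDb.bak'
        WITH INIT;

    Native backups left uncompressed and unencrypted also tend to dedupe much better on appliance-style devices, since successive backups share long identical runs of bytes.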

  • Will a 2.4GHz WAP interfere with a 5.0GHz WAP if placed directly next to each other?

    - by Dan
    This is mostly a curiosity question for people who know more about radio and Wi-Fi than I do. The 2.4GHz band is massively overpopulated near my house, to the point of sometimes getting 1000ms pings to the router from only a few feet away. inSSIDer finds at least 10 broadcasting SSIDs within around 15 seconds of starting, so this isn't a real surprise to me! Sometimes I can get good results by changing the channel to something like 3 or 8, but it's usually temporary, as the others use Auto Channel and hop around. Now, the router I have is capable of 5.0GHz, as is the laptop I type this on. Switching to 5.0GHz gives superb results: I can download at ~90Mbps and get consistent 1ms pings. The problem is that only this laptop supports 5.0GHz! My question: would I still get decent 5.0GHz performance if I place a 2.4GHz access point directly next to my router? And, indeed, will 2.4GHz continue working as 'normal'? Testing would be an obvious step, but I threw all my superfluous equipment out in a recent house move. My understanding is that I should get good performance, certainly in comparison to having two devices using the same frequency range, but I do believe there will be some impact by virtue of them being directly next to each other. (Cabling is not an option, as this is a rented house.)

  • AWS VPC ELB vs. Custom Load Balancing

    - by CP510
    So I'm wondering if this is a good idea. I have an Amazon AWS VPC set up with public and private subnets, so I already get the Internet Gateway and NAT. I was going to set up all my web servers (Apache2 instances) and DB servers in the private subnet, and use a load balancer/reverse proxy to pick up requests and send them into the private subnet's cluster of servers. My question, then: are Amazon's ELBs a good fit for this, or is it better to set up my own custom instance to handle the public requests and run them through the NAT using nginx or Pound? I like the second option just for the sake of having an instance I can log into and check, as well as taking advantage of caching and fail2ban DDoS prevention, and possibly using failsafes to redirect traffic. But I have no experience with ELBs, so I thought I'd ask your opinions. Also, if you have an opinion on this as well: would the second option allow me to have only one public IP address and route SSH connections through port numbers to the respective instances? Thanks in advance!
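
    On the last point, the custom-instance route does let one public IP fan out both HTTP and SSH; a minimal sketch, with made-up private addresses and an assumed nginx install on the public instance:

      # /etc/nginx/conf.d/proxy.conf -- HTTP load balancing into the private subnet
      upstream web_backend {
          server 10.0.1.11;
          server 10.0.1.12;
      }
      server {
          listen 80;
          location / {
              proxy_pass http://web_backend;
          }
      }

    and, for SSH, port-per-instance forwarding with iptables on the same box:

      iptables -t nat -A PREROUTING -p tcp --dport 2211 -j DNAT --to-destination 10.0.1.11:22
      iptables -t nat -A PREROUTING -p tcp --dport 2212 -j DNAT --to-destination 10.0.1.12:22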

  • What are some of the best answer file settings for a WDS Deployment?

    - by drpcken
    I've had my head buried in answer files for days now and have gotten quite comfortable setting them up, testing, etc. I use a handful of components to help my migrations. For my unattend.xml I like:

    - Windows-International-Core-WinPE: good for setting locales in the preboot environment (en-US for us English US speakers). Keeps me from having to set these on the initial image boot.
    - Windows-Setup_neutral: I like the WindowsDeploymentServices -> ImageSelection setting, especially if I'm only pushing a single image. This keeps me from having to select it each time.

    My OOBE_unattend.xml is really useful, and I barely have to touch anything during this part of the installation:

    - Windows-Shell-Setup_neutral: lets me put in a ProductKey for my MAK volume license (very useful and time saving). I can also set the TimeZone for the installation (sketched below).
    - Windows-UnattendedJoin_neutral: I couldn't live without this component. It joins the machine to my domain before logging in as a domain administrator. I would hate to not have this ability.
    - Windows-International-Core: again, this component really speeds up the OOBE process. I configure my locales and time zone so I don't have to do it by hand when the machine enters OOBE.
    - Windows-Shell-Setup: allows you to configure an autologon when the new machine is finished. I like to log on as a domain admin automatically for customizing and troubleshooting the new machine immediately after it is imaged. Also, the OOBE component under here lets me skip the EULA, hide wireless setup, and set my default NetworkLocation.

    All of this makes the entire OOBE totally automated. What are some other good components I am missing as far as helping me get these images pushed and configured as quickly as possible?
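
    For concreteness, the Shell-Setup pieces mentioned above sit in the specialize pass roughly as below; this is a sketch with a placeholder key and time zone, so validate it in Windows SIM before use:

      <settings pass="specialize">
        <component name="Microsoft-Windows-Shell-Setup"
                   processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35"
                   language="neutral" versionScope="nonSxS">
          <ProductKey>XXXXX-XXXXX-XXXXX-XXXXX-XXXXX</ProductKey>
          <TimeZone>Eastern Standard Time</TimeZone>
        </component>
      </settings>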

  • DNS propagation delay or bad configuration?

    - by Javier Martinez
    I have been waiting for DNS propagation for almost 24 hours. I'm not impatient, but I want to know if I configured my zone correctly or if I have an error in it. I think it is good, because if I use my server's DNS as my secondary DNS, I can resolve and look up the host fine.

      ;
      ; BIND data file for mydomain.net
      ;
      $TTL    86400
      @       IN      SOA     mydomain.net. mydomain.net. (
                              20120629        ; Serial
                              10800           ; Refresh  3 hours
                              3600            ; Retry    1 hour
                              604800          ; Expire   1 week
                              86400 )         ; Negative Cache TTL
      ;
      @       IN      NS      ns1
      @       IN      NS      ns2
              IN      MX      10 mail
      ns1     IN      A       5.39.X.Y
      ns2     IN      A       5.39.X.Z

    There are no errors about the BIND daemon in /var/log/syslog. Is everything correct? Do I only need to wait up to 48 hours for the DNS propagation? My nslookup from a remote machine, using the BIND host as the nameserver:

      $ nslookup mydomain.net
      Server:         bind-host-ip
      Address:        bind-host-ip#53

      Name:    mydomain.net
      Address: domain-ip
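
    Two checks that separate "the zone is wrong" from "the delegation hasn't propagated" (dig comes with the dnsutils package on Debian; the names below follow the sanitized values above):

      # Ask your own server authoritatively; if this answers with the aa flag set,
      # the zone itself is fine.
      dig @5.39.X.Y mydomain.net SOA +norecurse

      # Ask the parent (.net) servers which nameservers they delegate to; if ns1/ns2
      # are missing here, the problem is at the registrar, not in BIND.
      dig @a.gtld-servers.net mydomain.net NS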

  • Getting rid of your server in a small business environment

    - by andygeers
    In a small business environment, is it still necessary to have a central server? Speaking for my own company (a small charity with about 12 employees), we use our server (Windows Server 2003) for the following:

    - Email via Microsoft Exchange
    - Central storage
    - Acting as a print server
    - User authentication / Active Directory

    There are significant costs associated with running a server like this:

    - Electricity, first for the server itself and then for the air conditioning required (this thing pumps out a lot of heat)
    - Noise (of which there is a lot)
    - IT support bills (both Windows Server and Exchange are pretty complicated, and there are many ways they can go wrong)

    I've found ways to replace many of these functions with cheaper (better?) alternatives:

    - Google Apps / GMail is a clear win for us: we have so many spam-related problems it's not even funny, and Outlook is dog slow on our aging computers
    - You can buy networked storage devices with built-in print servers, such as the Netgear ReadyNAS™ RND4210, that would allow us to store/share all of our documents and access printers over the network

    The only thing that I can't figure out how to do away with is the authentication side of things. It seems to me that if we got rid of our server, you'd essentially have a bunch of independent PCs that had no shared pool of user accounts and no central administrator. Is that right? Does that matter? Am I missing any other good reasons to keep a central server? Does anybody know of any good, cost-effective ways of achieving the same end but without the expensive central server?

  • Is VGA port hot-pluggable?

    - by Martin Bøgelund
    In meetings, I often see people detach the VGA connector from one running laptop and connect it to another while the projector is still on. Is this 100% risk free, and OK by design of the VGA standard? If there's a risk involved in hot-plugging VGA, can it be removed by turning off or suspending the laptop, the display, or both? I see this being done all the time without causing disaster, so clearly I'm not interested in answers stating "we do it all the time, so it should be OK!". I want to know if there's a risk, real or in theory, that something breaks when doing this.

    EDIT: I did an internet search on the topic, and I never found a clear statement as to why it is safe or unsafe to hot swap VGA devices. The typical form is a forum question asking basically the same question as I did, and the following types of statements:

    - Yes it's hot swappable! I do it all the time!
    - It involves some kind of risk, so don't do it!
    - You're some kind of moron if you think there's a risk, so just do it!

    But no explanation as to why it is safe or not... Joe Taylor's answer below contains a link to a forum post and answers that basically give me the same statements as mentioned above. But again, no good explanation why. So I looked for an actual manual for a projector, and found the "Lenovo C500 Projector User's Guide". It states on page 3-1:

      Connecting devices
      Computers and video devices can be connected to the projector at the
      same time. Check the user's manual of the connecting device to confirm
      that it has the appropriate output connector.
      [image]
      Attention: As a safety precaution, disconnect all power to the projector
      and devices before making connections.

    But again, no good explanation.

  • Blocking HTTPS and P2P Traffic

    - by Genboy
    I have a Debian server running at the gateway level on a LAN. It runs Squid for creating block lists of websites, e.g. blocking social networking on the LAN. It also uses iptables. I am able to do a lot of things with Squid and iptables, but a few things seem difficult to achieve.

    1) If I block Facebook through their HTTP URL, people can still access https://www.facebook.com, because Squid doesn't see HTTPS traffic by default. However, if the users set the gateway IP address as the proxy in their web browser, then HTTPS is also blocked. So I can do one thing: using iptables, drop all outgoing 443 traffic, so that people are forced to set the proxy in their browser in order to browse any HTTPS site. However, is there a better solution for this? (A sketch of the iptables approach appears after this list.)

    2) As the number of blocked URLs increases in Squid, I am planning to integrate squidGuard. However, the good squidGuard lists are not free for commercial use. Does anyone know of a good squidGuard list that is free?

    3) Blocking Yahoo Messenger, Google Talk, etc. There are so many ports these instant messenger programs work on that you need to drop lots of outgoing ports in iptables. However, new ports keep getting added, so you have to keep adding them. And even if your list of ports is current, people can still use the web version of Google Talk, etc.

    4) Blocking P2P. I haven't been able to figure out how to do this so far.
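
    For point 1, a sketch of the usual two-part iptables approach (the interface name and Squid port are assumptions: eth1 is the LAN side, Squid listens on 3128):

      # Transparently push plain HTTP through Squid.
      iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128

      # Drop direct HTTPS from the LAN so browsers must use the configured proxy,
      # where Squid can apply its ACLs to the CONNECT request.
      iptables -A FORWARD -i eth1 -p tcp --dport 443 -j DROP

    When set as an explicit browser proxy, Squid can still filter HTTPS by the destination hostname in the CONNECT request; it just cannot inspect the encrypted payload.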

  • Software/hardware to build video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution, something like an online TV server, with the ability to make live broadcasts on the internet. What software could you recommend for that? What kind of hardware should it run on, and does anything about it need to be special? I am looking for a solution that could be scaled up to at least 1000 simultaneous users online with good video resolution. I think it is good to have a general answer on what direction to choose, but here are more details on my specific case: I am looking for a solution almost from scratch. We have some video content that we've produced, but it is not delivered over the internet yet. We are not tied to any particular vendor for now. We want to run 24 hours of streaming as three 8-hour blocks, with the content changing every day. We want the ability to make regular live broadcasts. I guess we will need several streaming quality options (low ~56 kb/s, mid ~273 kb/s). Some terms are just foreign to me (like play-truncation rate); if you could point out what parameters we should be aware of, that would be great. Uplink to the internet is to be determined. We plan to start with something and scale up along the way. If you already have some kind of media streaming server, just describe its configuration here (hardware, OS, software) and the peak number of concurrent users it serves. I think it could help people approaching this task.
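
    To give a flavor of the moving parts: one common stack is an encoder pushing an RTMP feed to a streaming server (Wowza, Flash Media Server, or an nginx RTMP module) that fans it out to viewers. The sketch below uses ffmpeg; the server URL, stream name and bitrates are invented:

      # Encode a file (or capture source) and push it to the streaming server in real time.
      ffmpeg -re -i program_block.mp4 \
             -c:v libx264 -b:v 273k -c:a aac -b:a 48k \
             -f flv rtmp://stream.example.com/live/channel1

    Note the bandwidth arithmetic: at ~273 kb/s per viewer, 1000 simultaneous viewers need roughly 273 Mb/s of sustained uplink, which is why setups at this size usually push one stream to a CDN or relay tier instead of serving viewers directly.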

  • Kindly guide me to buy a new laptop [on hold]

    - by Its me 007
    I am from India. I want to buy a new laptop. I have shortlisted a few, but am confused about which processor, chipset and graphics will best suit my requirements. Note: I am not able to post clickable links; you will have to copy and paste them. Sorry.

    1) HP Pavilion 15-N004TX (4th Gen Ci5-4200U / 4GB RAM / 500GB HDD / 1GB Radeon graphics) - Rs 39990
       www.homeshop18.com/hp-pavilion-15-n004tx-laptop-4th-gen-intel-core-i5-4200u-4gb-500gb-15-6-linux-silver-black/computers-tablets/laptops/product:30989197/cid:16317/

    2) Lenovo Essential G510 59-398452 (4th Gen Ci5-4200M / 4GB / 500GB / Win8 / 2GB ATI Sunpro 8570 graphics) - Rs 44969
       www.flipkart.com/lenovo-essential-g510-59-398452-laptop-4th-gen-ci5-4gb-500gb-win8-2gb-graph/p/itmdp26eprwf5k5v?gclid=CMnh99GA2LoCFaRU4godNiUAGQ&semcmpid=sem_7847244212_laptopsnew_goog&tgi=sem%2C1%2CG%2C7847244212%2Cg%2Csearch%2C%2C24387103114%2C1t1%2Cb%2C%2Blenovo+%2Bg510%2F59+%2B398452%2Cc%2C%2C%2C%2C%2C%2C%2C2

    3) HP Pavilion G6-2303TX (3rd Gen Ci5-3230M / 4GB / 500GB / DOS / 1GB graphics) - Rs 40500
       www.flipkart.com/hp-pavilion-g6-2303tx-laptop-3rd-gen-ci5-4gb-500gb-dos-1gb-graph/p/itmdm6yzh4gr4cxd?pid=COMDM6YHWMGDRDEZ&ref=1d2b85fc-a03d-4c7d-844b-ec9e8dc95a81

    4) HP Pavilion 15-E039TX (3rd Gen Ci5-3230M / 4GB / 1TB / Win8 / 2GB graphics) - Rs 46690
       www.flipkart.com/hp-pavilion-15-e039tx-laptop-3rd-gen-ci5-4gb-1tb-win8-2gb-graph/p/itmdn4d9wykhdcpz?pid=COMDN4CZGFMGJNTN&ref=1d2b85fc-a03d-4c7d-844b-ec9e8dc95a81

    Now I am confused about:

    - Which processor and chipset are best?
    - How much graphics memory is enough? (I'm not a gamer.)
    - Are any of these laptops future-proof, i.e. will they at least support upcoming programming software that demands more processor power and memory?

    The laptop will mainly be used for multiprocessing. It should at least be capable of running the following for at least 4 years: Visual Studio 2012 and upcoming versions, SQL Server 2008 R2 and above, SharePoint, Blend, Photoshop. If anyone knows a good laptop with a good configuration within the 50k budget, kindly suggest. Thanks in advance.

  • Does the OSS backup solution Amanda (amanda.org) support sparse files?

    - by user97961
    I want to (or rather have to) do backups of my KVM virtual machine images. I have searched for days for a good backup solution. I know Amanda is a very good solution. It would be kind if someone can tell me whether the following is supported:

    1. Trigger the creation of an LVM snapshot (by invoking a shell script that I will write for that purpose; see the sketch after this list).
    2. Do a differential/delta backup of my KVM LVM qcow2 sparse file, i.e. copy only the bits/bytes that actually changed (a delta backup). It has to support that the file being backed up is a sparse file. (rsync seems to have some kind of problem in this regard: if the file does not exist yet on the other side, it will create a full file, not a sparse file.)
    3. Release the LVM snapshot (by invoking a script that I will write for that purpose).

    It's strange; I have found no documentation anywhere about this when searching the internet. Zmanda (Commercial Edition) has support for Xen VM backup (but not for KVM, as far as I can tell)...
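
    For steps 1 and 3, the snapshot wrapper is simple enough to sketch (the volume group, LV and mount point names here are invented):

      #!/bin/sh
      # Create a read-only view of the VM images while they keep running.
      lvcreate --snapshot --name vm-backup-snap --size 5G /dev/vg0/vm-images
      mkdir -p /mnt/vm-backup-snap
      mount -o ro /dev/vg0/vm-backup-snap /mnt/vm-backup-snap

      # ... point the backup run (e.g. an Amanda DLE) at /mnt/vm-backup-snap ...

      umount /mnt/vm-backup-snap
      lvremove -f /dev/vg0/vm-backup-snap

    For step 2, note that rsync's --sparse option covers the initial copy but has historically not been combinable with --inplace, which is what its delta transfer needs to update an existing image efficiently; that trade-off is worth testing before committing to any tool.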

  • How to set up multiple SSIDs with bandwidth limiting on a single wireless router?

    - by Rahul Narain
    I have an Asus WL-520GU wireless router connected to a cable modem that I use for wireless internet access in my apartment. I would like to set it up so that it provides two SSIDs: one secured and password-protected for my regular use, and a "guest" SSID that's unsecured but throttled to, say, 10% of the available bandwidth. What is the most straightforward way to do this? I've been looking into DD-WRT and Tomato, both of which support my router. DD-WRT supports setting up multiple SSIDs using the GUI, but I don't know if it's possible to limit the bandwidth of each SSID independently; point #12 in this FAQ thread says it's not possible to limit by day or by MAC address, which is discouraging but not conclusive. Tomato allows bandwidth limits in its QoS settings, going by the screenshot here, but multiple SSID support is still experimental and it doesn't look like it will work with the encryption settings or bandwidth limits in the GUI. I'd like to know a good way to do this which gives me the fewest opportunities for screwing up. I'm no stranger to the command line, if that turns out to be what's necessary, but if so, please explain what the commands are doing because I don't have a good mental model of what needs to happen to set this up.
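
    On the DD-WRT route, the guest SSID typically appears as its own virtual interface, which Linux's tc can cap; a sketch only, since the interface name (wl0.1 on many Broadcom builds) and the rate are assumptions to verify on your build:

      # Cap everything leaving the guest virtual AP at roughly 2 Mbit/s.
      tc qdisc add dev wl0.1 root tbf rate 2mbit burst 32kbit latency 400ms

    A fixed tbf rate is cruder than a true 10% share; an HTB class structure can express "10% guaranteed, borrow when idle", but it must be attached to the interface the guest traffic actually flows through.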

  • Manipulating IIS7 Configuration with PowerShell

    - by John Price
    I'm trying to script some IIS7 setup tasks in PowerShell using the WebAdministration module. I'm good with setting simple properties, but am having some trouble when it comes to dealing with collections of elements in the config files. As an immediate example, I want application pools to recycle on a given schedule, and the schedule is a collection of times. When I set this up via the IIS management console, the relevant config section looks like this:

      <add name="CVSupportAppPool">
        <recycling>
          <periodicRestart memory="1024" time="00:00:00">
            <schedule>
              <clear />
              <add value="03:31:00" />
            </schedule>
          </periodicRestart>
        </recycling>
      </add>

    In my script I would like to be able to accomplish the same thing, and I have the following two lines:

      # clear any pre-existing schedule
      Clear-WebConfiguration "/system.applicationHost/applicationPools/add[@name='$($appPool.Name)']/recycling/periodicRestart/schedule"

      # add the new schedule
      Add-WebConfiguration "/system.applicationHost/applicationPools/add[@name='$($appPool.Name)']/recycling/periodicRestart/schedule" -Value (New-TimeSpan -h 3 -m 31)

    That does almost the same thing, but the resulting XML lacks the <clear/> element that is created via the GUI, which I believe is necessary to avoid inheriting any default values. This sort of collection (with "add", "remove", and "clear" elements) seems to be fairly standard in the config files, but I can't seem to find any good documentation on how to interact with them consistently.

  • SQL Server architecture - they want to move my database to a new instance... Why?

    - by O'MALLEY
    Our current production database environment contains about 10 similarly managed databases. Our agency has just purchased and is installing new blade chassis, and wants to move my database to a new instance (leaving the other 9 on another). This decision is being driven by one of our IT staff, not a DBA. I am a project manager, not a DBA, but I know enough not to necessarily have a good feeling about this decision, and I am urging our IT department to make a sound decision based on what is best for the database. Our IT department has stated that it is not good to have all our eggs in one basket, and has also stated that my database contains "regulatory data" so it should be on its own instance. A couple of truths:

    - None of the databases on the current instance are OLTP databases, nor are any of them data warehouses.
    - My database currently has joins/views against a couple of the other databases in the production environment.

    So my questions are as follows: Am I wrong to disregard a statement about eggs in baskets? (Hello, this is why we have maintenance plans and disaster recovery plans.) I'll mention that other databases also contain regulatory data. What types of questions do I need to ask to determine if this is a sound decision? (A DBA friend mentioned that if the service level agreement of said database does not radically differ from the others, then why do they want to do this?) I have done some research on linked servers. What arguments should I bring forth about the fact that I currently have views set up that rely on data from other DBs?
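
    On the linked-server point, the concrete cost is that every cross-database view changes shape once the databases live on different instances; a sketch with invented names:

      -- Today, on one instance: a plain three-part name, fast and transactionally simple.
      SELECT o.OrderId
      FROM OtherDb.dbo.Orders AS o;

      -- After the move: register the other instance and switch to four-part names.
      EXEC sp_addlinkedserver @server = N'OLDINSTANCE';

      SELECT o.OrderId
      FROM OLDINSTANCE.OtherDb.dbo.Orders AS o;

    Distributed queries like the second form bring network hops, MSDTC/distributed-transaction considerations, and weaker optimizer statistics, all of which are fair arguments to put in front of the IT department.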
