Search Results

Search found 16914 results on 677 pages for 'single threaded'.


  • Some Memory Slots Not Working on MSI FM2-A85XA-G65 Motherboard

    - by Mike Ciaraldi
    Short version of question: Does anyone have an MSI FM2-A85XA-G65 motherboard who can confirm that all four memory slots work?

    Long version: Several months ago I bought an MSI FM2-A85XA-G65 motherboard at Newegg. At that point I installed an AMD A8-5500 processor and two sticks of Corsair Vengeance 8 GB DDR3-1866 memory, and put it into my file server. I installed the RAM in slots 1 and 3, as directed in the manual, to enable dual-channel memory access. It seemed to work fine, so I bought a second identical mobo (which arrived dead, but was quickly replaced by Newegg) and set of RAM, installed an A10-5800K, and put that into my production Linux machine. Again, it seemed to work well.

    Eventually I happened to notice that on the server only 8 GB of RAM appeared in the BIOS. I tried each of the slots and memory modules individually and in various combinations. I even swapped processors with the production machine. The result was that putting memory in slots 1 and 2 worked (showing a total of 16 GB), but any memory in slots 3 or 4 was not recognized. However, all four memory slots in the production machine worked, and I confirmed this with both processors.

    I contacted MSI and arranged to ship the defective mobo back to them for replacement under warranty. I did not want my file server to be down in the interim, and I had another machine I wanted to upgrade, so I bought a third identical mobo to use. That one had the same problem -- only memory slots 1 and 2 worked. I tested it thoroughly with multiple processors and memory sticks. I sent the defective mobo back to MSI and they sent me a new one. This has the same memory slot problem. So I sent it back. The replacement arrived the other day and shows the same problem. I contacted MSI yet again and they said that nobody else has reported memory slot problems on this board and it must be my processor.

    So my score so far is, out of six boards of this model, I have:

      - One where all four slots work.
      - One which was dead on arrival.
      - Four where only memory slots 1 and 2 work.

    Before I tear my other machines apart and start swapping processors again I thought I would ask if anyone else has this exact model motherboard and could confirm that all four memory slots either do or do not work. According to MSI you should be able to just plug a single memory module into any of the slots and it will work (and it does on the one mobo I have which works correctly). If you have not yet used all four slots, this is a good time to test them so you know if you can expand your memory in the future. Thanks in advance to anyone who can help.
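    PS: if you want to test your slots from Linux without rebooting into the BIOS, here is a quick hedged sketch (assuming dmidecode is installed; the exact field names vary a little by board):

        # List each DIMM slot and what (if anything) the firmware detected in it
        sudo dmidecode --type memory | grep -E 'Locator|Size'

        # Cross-check the total the kernel actually sees
        free -m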

    Read the article

  • Model M Keyboard inputs incorrect characters after logging in to Fedora

    - by mickburkejnr
    I recently bought a 24-year-old IBM Model M keyboard. From what I gather, it had been left on a shelf for the last 5 years, so you can imagine the amount of dust, dirt and crap that was on it. Before cleaning it, I plugged it in to my laptop (running Fedora 17) using a PS/2 to USB adapter. What I found was, while it still works, the keys I press don't correspond to what is displayed on the screen. So for example, when I type S on the keyboard, I get ß displayed on the screen instead. At the time, I put this down to the adapter not working properly.

    Since then, I stripped the keys off the keyboard and cleaned the whole thing. It looks like it's just come out of a box! I then plugged it in to my computer (also running Fedora 17) via a standard PS/2 plug. The computer loaded up to the login screen, I typed in my password, pressed enter, and logged straight in to my machine.

    At this point, I opened up a text editor and started typing some stuff. To my horror, the keystrokes I was entering weren't coming up as intended. What came up instead were characters that would map to the pressed key, but only under a different keyboard language setting. I opened up a program to see what keyboard language had been selected, and the correct one for the keyboard was selected (which is UK in my case). I opened up a window that would show what characters mapped to what keys, I pressed every single key on the keyboard, and every corresponding block representing each key lit up. I went back to the text editor to try again, but I was still getting these random characters. What's more is that the backspace key would not work, although in the other utility it would flash when pressed.

    What I know is that at the login screen the keyboard must have entered the correct characters, otherwise I wouldn't have been able to log in. Furthermore, keys that don't respond while using a text editor are sending signals to the computer, as illustrated in that keyboard utility. The question is why random characters are displayed when they really shouldn't be? Would this be a hardware fault or a software issue?
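    PS: a few hedged diagnostics that can help separate a layout problem from a hardware one (assuming an X11 session with the usual tools installed):

        # Show the raw keycode and keysym X receives for each key press
        xev

        # Report the layout X actually has loaded right now
        setxkbmap -query

        # Force the UK layout for the current session and retest
        setxkbmap -layout gb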

    Read the article

  • How to tune system settings for mongoDB on Linux?

    - by jsh
    Trying to squeeze a lot out of one question here -- please bear with me. Although the MongoDB man pages make several useful recommendations about system settings like ulimit (http://docs.mongodb.org/manual/reference/ulimit/), and other production factors (http://docs.mongodb.org/manual/administration/production-notes/), they seem mysteriously silent on things like virtual memory and swap settings. The closest we get to a hint is that "...the operating system's virtual memory subsystem manages MongoDB's memory..." (http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram).

    Running the same job -- high writes and high reads on about 10,000,000 records in a single collection -- on my 4-processor, 4 GB RAM MacBook and an 8-core Ubuntu box with 64 GB RAM, I saw dramatically WORSE read performance on the Linux box with factory settings, and could hear the disk constantly spinning, indicating high I/O and presumably swapping. Yes, other things were happening on the box, but there was plenty of free RAM, disk space, etc.; furthermore, I did not see evidence that Mongo was expanding to take advantage of all that free RAM as it is touted to do.

    Linux box default settings were as follows:

        vm.swappiness = 60
        vm.dirty_background_ratio = 10
        vm.dirty_ratio = 20
        vm.dirty_expire_centisecs = 3000
        vm.dirty_writeback_centisecs = 500

    I hazarded some guesses looking at docs and blogs for other types of databases (Oracle, MySQL, etc.), experimented, and adjusted as below:

        vm.swappiness = 10
        vm.dirty_background_ratio = 5
        vm.dirty_ratio = 5
        vm.dirty_writeback_centisecs = 250
        vm.dirty_expire_centisecs = 500

    I saw some immediate apparent improvements in read time. However, when I ran my test jobs again, read performance continued to be painfully sluggish during heavy writes. Then, I REBUILT the collection from an available data source -- and suddenly I can read at 1 ms or less per record WHILE doing the write job!

    So the question is really two-fold:

    1) What are appropriate VM settings for MongoDB on Linux?

    2) (bonus) Does Mongo do some checking or optimization with the OS while data is being built? In other words, if I have built a large data set with suboptimal VM or I/O settings, does Mongo make assumptions during the memory-mapping process that will fail to take advantage of optimizations down the road?

    Obviously I don't fully grok memory mapping under the hood (I was hoping I wouldn't have to). Any help appreciated... thanks! -j
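    For reference, this is roughly how I applied and persisted the adjusted values above (a sketch; the values come from my experiment, not a recommendation, and the file name 60-mongodb.conf is just illustrative):

        # Apply at runtime
        sudo sysctl -w vm.swappiness=10
        sudo sysctl -w vm.dirty_background_ratio=5
        sudo sysctl -w vm.dirty_ratio=5
        sudo sysctl -w vm.dirty_writeback_centisecs=250
        sudo sysctl -w vm.dirty_expire_centisecs=500

        # Persist across reboots
        cat <<'EOF' | sudo tee /etc/sysctl.d/60-mongodb.conf
        vm.swappiness = 10
        vm.dirty_background_ratio = 5
        vm.dirty_ratio = 5
        vm.dirty_writeback_centisecs = 250
        vm.dirty_expire_centisecs = 500
        EOF
        sudo sysctl -p /etc/sysctl.d/60-mongodb.conf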

    Read the article

  • Lots of dropped packets when tcpdumping on busy interface

    - by Frands Hansen
    My challenge: I need to do tcpdumping of a lot of data -- actually from 2 interfaces left in promiscuous mode that are able to see a lot of traffic.

    To sum it up:

      - Log all traffic in promiscuous mode from 2 interfaces
      - Those interfaces are not assigned an IP address
      - pcap files must be rotated per ~1G
      - When 10 TB of files are stored, start truncating the oldest

    What I currently do: right now I use tcpdump like this:

        tcpdump -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER

    The $FILTER contains src/dst filters so that I can use -i any. The reason for this is that I have two interfaces and I would like to run the dump in a single thread rather than two. compress.sh takes care of assigning tar to another CPU core, compressing the data, giving it a reasonable filename and moving it to an archive location. I cannot specify two interfaces, thus I have chosen to use filters and dump from any interface. Right now, I do not do any housekeeping, but I plan on monitoring disk and when I have 100G left I will start wiping the oldest files -- this should be fine.

    And now, my problem: I see dropped packets. This is from a dump that has been running for a few hours and collected roughly 250 gigs of pcap files:

        430083369 packets captured
        430115470 packets received by filter
        32057 packets dropped by kernel   <-- This is my concern

    How can I avoid so many packets being dropped?

    These things I did already try or look at: I changed the value of /proc/sys/net/core/rmem_max and /proc/sys/net/core/rmem_default, which did indeed help -- actually it took care of just around half of the dropped packets.

    I have also looked at gulp -- the problem with gulp is that it does not support multiple interfaces in one process and it gets angry if the interface does not have an IP address. Unfortunately, that is a deal breaker in my case. Next problem is that when the traffic flows through a pipe, I cannot get the automatic rotation going. Getting one huge 10 TB file is not very efficient and I don't have a machine with 10TB+ RAM that I can run wireshark on, so that's out.

    Do you have any suggestions? Maybe even a better way of doing my traffic dump altogether.
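    For completeness, this is roughly how I raised the buffers, plus one knob I have not fully explored yet (a sketch; the sizes are illustrative, not recommendations):

        # Enlarge the kernel's receive buffers (bytes)
        sysctl -w net.core.rmem_max=134217728
        sysctl -w net.core.rmem_default=134217728

        # tcpdump can also request a bigger capture buffer itself (-B is in KiB)
        tcpdump -n -B 65536 -C 1000 -z /data/compress.sh -i any \
            -w /data/livedump/capture.pcap $FILTER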

    Read the article

  • Conflicting ip routes with local table on attaching a virtual network interface

    - by user1071840
    I have an EC2 instance with these ip rules:

        $ sudo ip rule show
        0:      from all lookup local
        32766:  from all lookup main
        32767:  from all lookup default

    I can attach an elastic network interface to it with a private IP. Say the IP of my machine is 10.1.3.12 and the IP of the interface is 10.1.1.190. As soon as I attach the interface to my machine, a new entry is added to the routing policy and the local routing table:

        $ sudo ip rule show
        0:      from all lookup local
        32765:  from 10.1.1.190 lookup 10003
        32766:  from all lookup main
        32767:  from all lookup default

        $ sudo ip route show table local
        broadcast 10.1.1.0 dev eth3  proto kernel  scope link  src 10.1.1.190
        local 10.1.1.190 dev eth3  proto kernel  scope host  src 10.1.1.190
        broadcast 10.1.1.255 dev eth3  proto kernel  scope link  src 10.1.1.190
        broadcast 10.1.3.0 dev eth0  proto kernel  scope link  src 10.1.3.12
        local 10.1.3.12 dev eth0  proto kernel  scope host  src 10.1.3.12
        broadcast 10.1.3.255 dev eth0  proto kernel  scope link  src 10.1.3.12
        broadcast 127.0.0.0 dev lo  proto kernel  scope link  src 127.0.0.1
        local 127.0.0.0/8 dev lo  proto kernel  scope host  src 127.0.0.1
        local 127.0.0.1 dev lo  proto kernel  scope host  src 127.0.0.1
        broadcast 127.255.255.255 dev lo  proto kernel  scope link  src 127.0.0.1

    I can send traffic to this ENI directly from a host that can have the same IP as the host the ENI is attached to. This is where the problem starts. I ran tcpdump on the port in question and saw multiple SYNs going to the ENI with src 10.1.3.12 and destination 10.1.1.190, but didn't see even a single ACK. In my understanding, if ACKs were being sent from the ENI they'd have destination 10.1.3.12, i.e. the same as the local machine's IP, and such packets will now be routed as local packets matching the local routing policy:

        local 10.1.3.12 dev eth0  proto kernel  scope host  src 10.1.3.12

    I'd like all the packets originating from 10.1.1.190 (my ENI) to go back out on the same interface, i.e. eth3 in this case. Contents of the new table 10003 are:

        $ sudo ip route show table 10003
        default via 10.1.1.1 dev eth3

    I think I can do the following:

      - I don't know if it's possible, but probably decrease the priority of the local table so the packets match table 10003 (see the sketch below for what I mean).
      - Use iptables to mangle these packets and update the local table route to include the mark information.

    But I'm not sure if these are the right approaches.
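    To make the first idea concrete, this is the kind of rule shuffle I have in mind (a sketch only; some iproute2 versions refuse to delete the pref-0 rule, and demoting the local table means local delivery for 10.1.1.190 then depends on table 10003, so this needs careful testing):

        # Re-add the local table lookup at a lower priority...
        ip rule add pref 100 lookup local

        # ...then remove the original pref-0 entry (only after the pref-100 copy exists)
        ip rule del pref 0 lookup local

        # Consult the ENI table before the local table for traffic sourced from the ENI
        ip rule add pref 50 from 10.1.1.190 lookup 10003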

    Read the article

  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules-of-thumb exist for a good RAM to disk-space ratio. We're planning our next round of systems, and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get.

    But now for some performance details!

    During the normal workflow of a single project-run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, it's extremely high read, as it needs to deduplicate records identified in the first half of processing. This is the workflow for which "keep your working set in RAM" is made, and we're designing around that assumption.

    The entire dataset is continually hit with random queries from end-user derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so it will be very likely to incur disk hits.

    A secondary processing workflow is high read of previous processing runs that may be days, weeks, or even months old, and is run infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect.

    Finished document sizes vary widely, but the median size is about 8K.

    The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic. I have read elsewhere that a 1:10 RAM-GB to HD-GB ratio is a good rule-of-thumb for slow disks. As we are seriously considering using much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks.

    I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and keep growing.

    Read the article

  • Excel 2010: dynamic update of drop down list based upon datasource validation worksheet changes

    - by hornetbzz
    I have one worksheet for setting up the data sources of multiple data validation lists. In other words, I'm using this worksheet to provide drop down lists to multiple other worksheets. I need to dynamically update all worksheets upon any single or several changes on the data source worksheet. I understand this should come with an event macro over the entire workbook. My question is how to achieve this while keeping the "OFFSET" formula across the whole workbook? Thx

    To support my question, here is the piece of code that I'm trying to get working, with the following information provided: I'm using such a formula for a pseudo-dynamic update of the drop down lists, for example:

        =OFFSET(MyDataSourceSheet!$O$2;0;0;COUNTA(MyDataSourceSheet!O:O)-1)

    I looked into the Pearson book event chapter but I'm too noob for this. I understand this macro and implemented it successfully as a test with the drop down list on the same worksheet as the data source. My point is that I don't know how to deploy this over a complete workbook.

    Macro related to the data source worksheet:

        Option Explicit

        Private Sub Worksheet_Change(ByVal Target As Range)
        ' Macro to update all worksheets with drop down list referenced upon
        ' this data source worksheet, based on ref names
            Dim cell As Range
            Dim isect As Range
            Dim vOldValue As Variant, vNewValue As Variant
            Dim dvLists(1 To 6) As String 'data validation area
            Dim OneValidationListName As Variant

            dvLists(1) = "mylist1"
            dvLists(2) = "mylist2"
            dvLists(3) = "mylist3"
            dvLists(4) = "mylist4"
            dvLists(5) = "mylist5"
            dvLists(6) = "mylist6"

            On Error GoTo errorHandler

            For Each OneValidationListName In dvLists
                'Set isect = Application.Intersect(Target, ThisWorkbook.Names("STEP").RefersToRange)
                Set isect = Application.Intersect(Target, ThisWorkbook.Names(OneValidationListName).RefersToRange)

                ' If a change occurred in the source data sheet
                If Not isect Is Nothing Then
                    ' Prevent infinite loops
                    Application.EnableEvents = False

                    ' Get previous value of this cell
                    With Target
                        vNewValue = .Value
                        Application.Undo
                        vOldValue = .Value
                        .Value = vNewValue
                    End With

                    ' LOCAL dropdown lists: for every cell with validation
                    For Each cell In Me.UsedRange.SpecialCells(xlCellTypeAllValidation)
                        With cell
                            ' If it has list validation AND the validation formula matches
                            ' AND the value is the old value
                            If .Validation.Type = 3 And .Validation.Formula1 = "=" & OneValidationListName And .Value = vOldValue Then
                                ' Debug
                                ' MsgBox "Address: " & Target.Address
                                ' Change the cell value
                                cell.Value = vNewValue
                            End If
                        End With
                    Next cell

                    ' Call to other worksheets update macros
                    Call Sheets(5).UpdateDropDownList(vOldValue, vNewValue)
                    ' GoTo NowGetOut
                    Application.EnableEvents = True
                End If
            Next OneValidationListName

        NowGetOut:
            Application.EnableEvents = True
            Exit Sub

        errorHandler:
            MsgBox "Err " & Err.Number & " : " & Err.Description
            Resume NowGetOut
        End Sub

    Macro UpdateDropDownList related to the destination worksheet:

        Sub UpdateDropDownList(Optional vOldValue As Variant, Optional vNewValue As Variant)
            ' Debug
            MsgBox "Received info for update : " & vNewValue

            ' For every cell with validation
            For Each cell In Me.UsedRange.SpecialCells(xlCellTypeAllValidation)
                With cell
                    ' If it has list validation AND the value is the old value
                    ' If .Validation.Type = 3 And .Value = vOldValue Then
                    If .Validation.Type = 3 And .Value = vOldValue Then
                        ' Change the cell value
                        cell.Value = vNewValue
                    End If
                End With
            Next cell
        End Sub

    Read the article

  • What is the probable failure - no BSOD, no event log, monitors sleeping, force reboot required

    - by Tyler
    Every 3 to 15 days, my PC freezes. This typically happens when the computer is idle -- I'm coming home from work, back from vacation, etc. It's never happened while using my computer. When it freezes:

      - The monitors are in power save mode
      - The Caps Lock light on the (wireless) keyboard doesn't work
      - Ctrl-alt-del has no effect, the (wireless) mouse has no effect
      - The hardware reset button and a single press of the power button have no effect
      - The computer does not appear on the network
      - No BSOD, no memory dump
      - Event logs have no errors or indications of problems near the time of the crash -- only messages after reboot indicating that there was a reboot without a clean shutdown
      - Windows is set to never put the computer to sleep (just the display)

    Here are the vital stats of the build:

        OS            Windows 8 Pro 64-bit
        CPU           Intel i5-2400
        Mobo          Intel BOXDP67DE Micro ATX
        GPU           MSI N460GTX Cyclone768D5/OC
        RAM           CORSAIR XMS3 8GB (2 x 4GB) CMX8GX3M2A1333C9
        PSU           SeaSonic X Series X650 Gold
        System Drive  Samsung 840 Pro 256 GB SSD
        Data Drive    2 x Western Digital WD20EARS 2TB in hardware RAID 1
        Optical       Lite-On DVD burner IHAS424-98

    And here is the story of how the problem developed and what I've done to diagnose it:

      - January 2011: system built with Windows 7 64-bit, runs great.
      - March 2011: Intel replaced the mobo because of the bad SATA controllers.
      - October 2012: upgrade to Windows 8 (problems start shortly after).
      - January 2013: system freezes and causes the network to fail for the whole house. Unplug the network cable and other devices and PCs can use the internet. Plug it back in, internet goes away for everyone. Reboot and everything is fine.
      - March 2013: install Intel Gigabit CT PCI-E NIC, disable mobo NIC in BIOS. Network strangeness goes away. Freezes are less frequent. Memtest shows no problems (20 passes).
      - Early June 2013: replace Antec PSU with SeaSonic PSU.
      - Mid June 2013: replace OCZ Vertex 2 SSD with Samsung SSD.
      - Late June 2013: get frustrated and hope the community has some good ideas (I'm running out of budget to replace parts).

    My next plan of attack is setting "Turn off display" to Never and using a screen saver to see how that reacts on the next freeze. It makes me sad to waste power for up to 15 days though.

    Has anyone out there seen a problem like this? Any ideas on what kind of malfunction would act this way? Ideas of other diagnostic steps to take?

    Read the article

  • How should we serve files in a small bioinformatics cluster?

    - by cespinoza
    We have a small cluster of six Ubuntu servers. We run bioinformatics analyses on these clusters. Each analysis takes about 24 hours to complete, each Core i7 server can handle 2 at a time, takes as input about 5 GB of data and outputs about 10-25 GB of data. We run dozens of these a week. The software is a hodgepodge of custom Perl scripts and 3rd party sequence alignment software written in C/C++.

    Currently, files are served from two of the compute nodes (yes, we're using compute nodes as file servers) -- each node has 5 separately mounted 1 TB SATA drives (no RAID) pooled via GlusterFS 2.0.1. Each has 3 bonded Intel PCI gigabit Ethernet cards, attached to a D-Link DGS-1224T switch ($300 24-port consumer-level). We are not currently using jumbo frames (not sure why, actually). The two file-serving compute nodes are then mirrored via GlusterFS. Each of the four other nodes mounts the files via GlusterFS.

    The files are all large (4 GB+), and are stored as bare files (no database/etc.), if that matters. As you can imagine, this is a bit of a mess that grew organically without forethought, and we want to improve it now that we're running out of space. Our analyses are I/O intensive and it is a bottleneck -- we're only getting 140 MB/sec between the two fileservers, maybe 50 MB/sec from the clients (which only have single NICs).

    We have a flexible budget which I can probably get up to $5k or so. How should we spend our budget?

      - We need at least 10 TB of storage fast enough to serve all nodes. How fast/big does the CPU/memory of such a file server have to be?
      - Should we use NFS, ATA over Ethernet, iSCSI, GlusterFS, or something else?
      - Should we buy two or more servers and create some sort of storage cluster, or is 1 server enough for such a small number of nodes?
      - Should we invest in faster NICs (say, PCI-Express cards with multiple connectors)? The switch?
      - Should we use RAID? If so, hardware or software? And which RAID (5, 6, 10, etc.)?

    Any ideas appreciated. We're biologists, not IT gurus.

    Read the article

  • Resetting root password on Fedora Core 3 - serial cable access only

    - by Sensible Eddie
    A little background: we have an old rackmount server running a customised version of Fedora, manufactured by a company called Navaho. The server is a TeamCAT, running some proprietary rubbish called Freedom2. We have to keep it going -- the alternative is extraordinarily expensive, and the business is not likely to be running much longer to justify changing things.

    Through one means or another, it has fallen upon me to try and resolve our lack of root access. The previous admin has fallen under the proverbial bus, and nobody has any clue. We have no access to the root account for this server. ssh is running on the server, and there is one account, admin, that we can log in with, however it has no permission to do anything (ironic...).

    The only other way into the server is with a null-modem serial cable. This works... up to a point. I can see the BIOS, I can see the post-BIOS screen, and then I see "Starting grub", followed by another screen with about four lines of Linux information, but then it stops at that point. The server continues booting, and all services come online after around two minutes, but the serial terminal displays no more information.

    I understand it is possible to put Linux into "single user mode" to reset a root password, but I have no idea how to do this beyond trying to interrupt it at the grub stage listed above. When I have tried, it just froze. It was almost like grub had appeared (since the server did not continue booting) but I couldn't see it on the serial terminal. Which made me think: maybe the grub screen has some different serial settings? I don't know... it's the first time I've ever used serial for access!

    A friend of mine suggested trying to use a Fedora boot CD. We could boot from USB, so something along this approach is possible, but again we can still only see what's going on with the serial terminal, so it might not be achievable.

    Does anyone have any suggestions for things I can try? I appreciate this is a bit of a long shot, but any assistance would be invaluable.

    UPDATE 1 (28/8/12): we will be making some attempts on this today and will post further details later!
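    In case it helps anyone answering: my understanding is that with GRUB legacy (which Fedora Core 3 uses) the menu only appears on serial if grub.conf asks for it, roughly like this (a sketch; the paths and kernel line are illustrative, not taken from this box):

        # /boot/grub/grub.conf
        serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
        terminal --timeout=10 serial console

        # At the menu: press 'e' to edit the kernel line, append " single"
        # (or "init=/bin/bash"), then 'b' to boot -- e.g.:
        kernel /vmlinuz-2.6.12 ro root=/dev/VolGroup00/LogVol00 console=ttyS0,9600 single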

    Read the article

  • How to auto advance a PowerPoint slide after an exit animation is over?

    - by joooc
    A PowerPoint entrance animation set up with "Start: With Previous" starts right when a new slide is advanced. However, if you set up an exit animation in the same way, it doesn't start with the slide's ending sequence. Instead, the "Start: On Click" trigger needs to be used, and after your exit animation is over you still need one extra click just to advance to the next slide.

    Workarounds to this are obvious: create a duplicate slide, make the ending animations from the original slide the starting animations on the duplicate slide and let them be followed by whatever you want; or create a transition slide with those ending animations only and set up "Change Advance slide - Automatically after - [the time it takes your animations to finish]". These workarounds will make it work for your audience, visually. However, they have an impact on slide numbers, which you might need to adjust accordingly, and/or duplicate content changes. If you are the only one creating and using your presentation, this might be just fine. But if you are creating a presentation in collaborative mode with three other people and don't even know who will be the presenter at the end, you can mess things up.

    Let's be specific: most of my slides have a 0.2 s fly-in entrance animation applied to blocks of content coming from the right, bottom or left. Advancing to the next slide, I want them to fly out in another 0.2 s exit animation, followed by the new slide's 0.2 s fly-in entrance animation of the new blocks. The swapping of the blocks should be triggered while advancing to the next slide, as usual. As mentioned, I'm not able to achieve this without one extra click between the slides.

    I wrote a VBA script that should start together with an exit animation and will auto advance the slide after 0.3 s, when the exit animation is over. That way I should get rid of those extra clicks which are needed right now:

        Sub nextslide()
            iTime = 0.3
            Start = Timer
            While Timer < Start + iTime
                DoEvents
            Wend
            With SlideShowWindows(1).View
                .GotoSlide (ActivePresentation.SlideShowWindow.View.Slide.SlideIndex + 1)
            End With
        End Sub

    It works well when bound to a box, button or another object. But I can't make it run on a single click (anywhere on the slide) so that it could start together with the exit animation's on-click trigger. Creating a big transparent rectangular shape over the whole slide and binding the macro to it doesn't help either: by clicking it you only get the macro running, the exit animation is not triggered. Anyway, I don't want to bind the macro to any workaround object but the slide itself.

    Anyone knows how to trigger a PowerPoint VBA script on a slide's onclick event? Anyone knows a secret setting that will make the exit animation work as expected, i.e. animating right before exiting a slide while transitioning to the next one? Anyone knows how to beat this dragon? Thank you!

    Read the article

  • IE9 Error: There was a problem sending the command to the program

    - by HK1
    I'm working on a new/fresh Windows 7 32-bit machine that now has IE9 installed. The user is using the Dell Stardock application as his primary "desktop" (all his links there). When we place an internet link there and click on it, we get the following error message:

        There was a problem sending the command to the program.

    To me this indicates that IE9 is having trouble going to the website we want to go to, which should get passed as a parameter to the browser when it opens. I don't think this is a StarDock/ObjectDock problem because we also have some other problems with internet links. For example, we cannot move an internet link from the Desktop to the Quick Launch on the task bar. When we try, it forces us to put the link with the IE icon as part of the IE menu instead of allowing us to have a shortcut there as its own entry. I should mention, however, that links on the desktop and in the taskbar do work as we expect them to (without showing the above error message).

    It appears that this problem started after installing Windows Updates. Since we installed a whole bunch of updates at once, I have no idea which one caused the problem. I did have Google Chrome installed but I uninstalled it since the user wants to use IE. The problem started before I uninstalled Chrome. I also reset the browser settings on IE9. It didn't help. Next I uninstalled IE9, which took me back to IE8. This actually did resolve the problem, but the problem came back as soon as I installed IE9 again.

    We have Verizon Internet Security installed. It's actually a McAfee product rebranded to look like Verizon. I'm not real crazy over this software but the customer has a subscription, so we're not planning to change it. I have no reason to believe that this is causing the problem, and yet I know that security software is often to blame for strange issues.

    I've looked at the registry settings for the following keys and everything appears to be OK for every single one of them:

        HKEY_CLASSES_ROOT\.htm
        HKEY_CLASSES_ROOT\.html
        HKEY_CLASSES_ROOT\http\shell\open\command
        HKEY_CLASSES_ROOT\http\shell\open\ddeexec\Application
        HKEY_CLASSES_ROOT\https\shell\open\command
        HKEY_CLASSES_ROOT\https\shell\open\ddeexec\Application
        HKEY_CLASSES_ROOT\htmlfile\shell\open\command
        HKEY_CLASSES_ROOT\Microsoft.Website\Shell\Open\Command

    Edit 1: I've found two potential solutions but I won't be able to try them until tomorrow. One is to disable the "Windows Font Cache" service. Another is to clear the IE cache and browsing history. I won't be able to try out either solution until tomorrow since this is a remote client's machine. I see there are lots of other suggestions online, but if you take the time to read them through you'll see that the other suggestions didn't fix the problem.
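    For the record, these are the kinds of quick checks I ran from a command prompt before digging through the registry by hand (a sketch; output will obviously differ per machine):

        :: What .htm resolves to, and which program handles the htmlfile type
        assoc .htm
        ftype htmlfile

        :: The default value of one of the handler keys listed above
        reg query "HKCR\http\shell\open\command" /ve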

    Read the article

  • Hardware for multipurpose home server

    - by Michael Dmitry Azarkevich
    Hi guys, I'm looking to set up a multipurpose home server and hoped you could help me with the hardware selection. First of all, the services it will provide:

      - Hosting a MySQL database (for training and testing purposes)
      - FTP server
      - Personal mail server
      - Home media server

    So with this in mind I've done some research, and found some viable solutions:

      - A standard PC with the appropriate software (either second hand or new)
      - A non-solid state mini-ITX system
      - A solid state, fanless mini-ITX system

    I've also noted the pros and cons of each system. A standard second hand PC with old hardware would be the cheapest option. It could also have lacking processing power, not enough RAM and generally faulty hardware; also, huge power consumption, heat generation and noise levels. A standard new PC would have top-notch hardware and will stay that way for quite some time, so it's a good investment; but again, the main problem is power consumption, heat generation and noise levels. A non-solid state mini-ITX system would have the advantages of lower power consumption, lower cost (as far as I can see) and long lasting hardware; but it will generate noise and heat, which will be even worse because of the size. A solid state, fanless mini-ITX system would have all the advantages of a non-solid state mini-ITX but with minimal noise and heat; the main disadvantage is the read\write problems of flash memory. All in all I'm leaning towards a non-solid state mini-ITX because of the read\write issues of flash memory.

    So, after this overview of what I do know, my questions are:

      - Are all these services even providable from a single server? To my best understanding they are, but then again, I might be wrong.
      - Is any of these solutions viable? If yes, which one is the best for my purposes? If not, what would you suggest?

    Also, on a more software oriented note: OS-wise, I'm planning to run Linux. I'm currently thinking of four options I've been recommended: CentOS, Gentoo, DSL (Damn Small Linux) and LFS (Linux From Scratch). Any thoughts on this? Any other distro you would recommend?

    Regarding FTP services, I've heard good things about FileZilla. Anyone have any experience with that? Do you recommend it? Do you recommend something else?

    Regarding the mail service, I know nothing about this except that it exists. Any software you recommend for this task? Home media, same as the mail service. Any recommended software?

    Thank you very much.

    Read the article

  • How to prevent dual booted OSes from damaging each other?

    - by user1252434
    For better compatibility and performance in games I'm thinking about installing Windows in addition to Linux. I have security concerns about this, though.

    Note: "Windows" in the remaining text includes not only the OS but also any software running on it, regardless of whether it comes included or is additionally installed, and whether it is started intentionally or unintentionally (virus, malware).

    Is there an easy way to achieve the following requirements?

      - Windows MUST NOT be able to kill my Linux partition or my data disk -- neither single files (virus infection) nor by overwriting the whole disk.
      - Windows MUST NOT be able to read the data disk (extra protection against spyware).
      - Linux may or may not have access to the Windows partition.
      - Both Linux and Windows should have full access to the graphics card; this rules out desktop VM solutions for gaming. I want the manufacturer's Windows graphics card driver.

    Regarding Windows being unable to destroy my Linux install: this is not just the usual paranoia; it has happened to me in the past. So I don't accept "no ext4 driver" as an argument. Once bitten, twice shy. And even if destruction targeted at specific (Linux) files is nearly impossible, there should be no way to shred the whole partition. I may accept the risk of malware breaking out of a barrier (e.g. a VM) around the whole Windows box, though.

    Currently I have a system disk (SSD) and a data disk (HDD), both SATA. I expect I'll have to add another disk. If I don't: even better. My CPU is an Intel Core i5, with VT-x and VT-d available, though untested.

    Ideas I've had so far:

      - Deactivate or hide the other HDs until reboot. Is this possible at a low level? Can the boot loader (grub) do this for me?
      - A tiny VM layer: load Windows in a VM that provides access to almost all hardware, except the HDs. Any ready-made software solution for this? Preferably free. As I said, the main problem seems to be providing full access to the graphics card.
      - A hardware switch to cut power to the disks. Commercial products are expensive and there are lots of warnings against cheap home-built solutions. Preferably all three hard disks with one switch (one push). Mobile racks -- won't wear from daily swapping be a problem?

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it".

    Right now, I have the hits-tracking portion completed, and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with gunicorn as the gateway. My 2 GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1200 static files per second (benchmarked using Apache ab against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets.

    However, when I benchmark with millions of hits, I notice a few things:

      - No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis.
      - Non-constant memory usage -- presumably due to Redis' memory managing, my memory usage will gradually climb up and then drop back down, but it's never once been my bottleneck.
      - System load hovers around 2-4; the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second.
      - If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250.

    My questions:

      a. Does it look like I'm maxing out this server yet? Is 1,200/s static-file nginx performance comparable to what others have experienced?
      b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much. (The sketch after this list shows what I have been experimenting with.)
      c. Are there any Linux-level settings that could be limiting my incoming connections?
      d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil.

    Thanks in advance, all :)
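    For reference, the tuning I have been experimenting with looks roughly like this (a sketch; the values are illustrative starting points, not recommendations, and worker_processes auto needs a reasonably recent nginx):

        worker_processes      auto;        # or the number of cores
        worker_rlimit_nofile  100000;

        events {
            worker_connections  4096;
            multi_accept        on;
        }

    And at the Linux level, the knobs that commonly cap inbound connection rates:

        sysctl -w net.core.somaxconn=4096
        sysctl -w net.ipv4.ip_local_port_range="10240 65535"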

    Read the article

  • Successfully concatenating multiple videos

    - by wiseguydigital
    My mission is to create videos out of old web slideshows. To start with I have JPEGs and audio files that worked as Flash slideshows in an old system, structured like this:

    Audio structure:

        my_audio_1.mp3 (this file is a 3 second mp3 of silence)
        my_audio_2.mp3
        my_audio_3.mp3
        my_audio_4.mp3
        etc... roughly 30 mp3s per slideshow

    Image structure:

        my_image_1.jpg (this acts as the opening slide)
        my_image_2.jpg
        my_image_3.jpg
        my_image_4.jpg
        etc... roughly 30 images per slideshow

    As there are almost 100 slideshows that must be converted to video, I have created a web-based interface using PHP to automate the process, which sits on a local system and attempts to combine the files using shell_exec(). The process uses the following workflow:

      1. Loop through each slide and make an avi or mpeg. So for instance my_mini_video_2.avi would be a video that consists of my_image_2.jpg and has a soundtrack of my_audio_2.mp3. This slide would last the length of my_audio_2.mp3.
      2. Join / stitch / concat all of the mini videos to create the final video, using a combination of cat and either mencoder or ffmpeg (I have also tried avimerge, but to no avail).
      3. Transcode the new 'master' video to various formats such as flv etc.

    I thought this would be simple and have been close on many occasions, but it still won't work. I can't get past stage 2, as I can't get a perfect 'master' video. I have now experimented with mencoder and ffmpeg and seem to have been through every combination I can think of. The problem is that the audio and visuals never sync, no matter what I try.

    Also, I have even tried creating audio-less mini videos, joining the MP3s into one long MP3 using both cat and mp3wrap, and then assigning the new long MP3 as the audio track, but this always produces either a very short file or a badly slowed down file and makes the female voiceover sound like a male boxer!!!

    There appears to be no problem at all with the original files. Does anybody have any experience in producing a video successfully from the same kind of starting point? Or any ideas on what I may be doing wrong? As an example: if I create silent mini-videos and stitch them together into 'temp-master.mpg', and then join the MP3s together into a single MP3 called 'temp-master-audio.mp3', the audio file's duration is 09:10 and the video file's duration is 08:35. They should be the same, and the audio will seem sloooow.

    I haven't posted code as I have written lots and lots of combinations.
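    One representative attempt, in case it helps to have something concrete to critique (a sketch assuming a reasonably recent ffmpeg -- the concat demuxer needs ffmpeg 1.1+ -- with illustrative codec choices):

        # Build one mini video per slide: loop the image for the length of the audio
        ffmpeg -loop 1 -i my_image_2.jpg -i my_audio_2.mp3 -shortest \
            -c:v mpeg4 -c:a copy my_mini_video_2.avi

        # Concatenate with the concat demuxer instead of raw `cat`, which is one
        # known cause of the A/V drift byte-level joins produce.
        # (Watch the lexicographic ordering if slide numbers go past 9.)
        printf "file '%s'\n" my_mini_video_*.avi > list.txt
        ffmpeg -f concat -safe 0 -i list.txt -c copy master.avi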

    Read the article

  • Cloud storage provider lost my data. How to back up next time?

    - by tomcam
    What do you do when cloud storage fails you?

    First, some background. A popular cloud storage provider (rhymes with Booger Link) damaged a bunch of my data. Getting it back was an uphill battle with all the usual accusations that it was my fault, etc. Finally I got the data back. Yes, I can back this up with evidence. Idiotically, I stayed with them, so I totally get that the rest of this is on me.

    The problem had been with a shared folder that works with all 12 computers my business and family use with the service. We'll call that folder the Tragic Briefcase. It is a sort of global folder that's publicly visible to all computers on the service. It's our main repository.

    Today I decided to deal with some residual effects of the Crash of '11. Part of the damage they did was that on just one of my computers (my primary, of course) all the documents in the Tragic Briefcase were duplicated in the Windows My Documents folder. I finally started deleting them. But guess what: though they appeared to be duplicated in the file system, removing them from My Documents on the primary PC caused them to disappear from the Tragic Briefcase too. They efficiently disappeared from all the other computers' Tragic Briefcases as well. So now, 21 gigs of files are gone, and of course I don't know which ones.

    I want to avoid this in the future. Apart from using a different storage provider, the bigger picture is this: how do I back up my cloud data? A complete backup every week or so from web to local storage would cause me to exceed my ISP's bandwidth. Do I need to back up each of my 12 PCs locally? I do use Backupify for my primary Google Docs, but I have been storing taxes, confidential documents, Photoshop source, video source files, and so on using the web service. So it's a lot of data, but I need to keep it safe. Backup locally would also mean 2 backup drives or some kind of RAID per PC, right? Because you can't trust a single point of failure.

    Assuming I move to DropBox or something of its ilk, what is the best way to make sure that if the next cloud storage provider messes up I can restore?

    Read the article

  • How to move partition with Users to a new SATA controller?

    - by Al Kepp
    Our current situation: X79 chipset. Windows 7 installed on a SATA SSD running on a Marvell controller. C:\Users contains just the Public and computer-name folders; all the other users are created in E:\Users. That partition is on two HDDs running on an Intel RAID controller. (The motherboard has got two SATA/RAID onboard controllers.)

    The goal is: without reinstallation, I want to move the SSD to a standard SATA controller. Then I want to get rid of RAID 1, move one HDD to the same internal Intel SATA controller, and move the current E:\Users to that new place. I want it to stay as E:\Users, but I need to reinstall the HDD to let it work in SATA mode without RAID.

    So I face several problems. I am sure all are solvable with free software utilities, but I don't know how exactly to do it. I can see these particular problems:

      1. I have got all users at E:\Users. When I turn off that E:\ disk, I won't be able to log in to Windows. I need at least one admin account to be placed back at C:\Users.
      2. The current C: runs on a standard 120 GB SSD, but it is connected to a Marvell SATA/RAID controller. I am afraid Windows won't let me put it on the Intel SATA controller due to a hardware/licence check, and I won't be able to use a standard W7 recovery disk either, because there probably isn't a Marvell SATA/RAID driver on it. I haven't tried anything yet, because I am afraid I could end up with a computer that doesn't work at all. (I want to move it to a standard ICH10 Intel SATA controller so we have no problems with it in future. I think it is not very safe when we use any nonstandard hardware to boot the computer.)
      3. I need to somehow back up the current E:\ disk and restore it to the new E:. I hope this will be the easy part (as long as the admin account resides on C:). The E: RAID array is very large, but it is almost empty (less than 100 GB of data), so I can make the partition smaller so it can easily fit on a single SATA HDD. (A sketch of this copy step is below.)
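    For step 3, my current plan is a plain mirror copy from an elevated command prompt, something like this (a sketch; F: stands for wherever the re-initialised disk mounts first, before it is relettered back to E:):

        rem Mirror the old Users tree onto the new disk, preserving ACLs and ownership
        rem /XJ skips junction points so the copy does not recurse into them
        robocopy E:\Users F:\Users /MIR /COPYALL /XJ /R:1 /W:1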

    Read the article

  • Switching efficiently between windows, not apps, in OS X

    - by Vultan
    Previous questions have asked "how can I efficiently switch between windows, not applications, in OS X?" (Switching windows on OS X, Switch between windows on Mac OS X? and others). The most recommended suggestions seem to be:

      1. Use some combo of cmd-tab and cmd-~.
      2. Use Exposé, and possibly Spaces.
      3. Use Witch.

    I spent the money on Witch, and have been using it for a few weeks; it's OK, but it is sometimes slow to respond, sometimes buggy on window order, crashes my system if I disable and re-enable it too many times, and doesn't work properly with X11 apps.

    The built-in cmd-tab and cmd-~ are OK, but still bring an entire application to the forefront. A very common workflow for me is to bounce back and forth between two windows (for example, a browser window and a Thunderbird email in progress), when both apps (the browser and the email software) have multiple windows open. I can use cmd-tab to get back and forth between apps, but whenever I switch to an app, ALL windows from that app pop up. That suddenly fills my screen with irrelevant data and windows, and often drops those other windows in front of the single window from the other app that I was using and would conveniently like to keep viewing even though it isn't in focus.

    Exposé seems to be the preferred "OS X natural way," but I can't seem to get myself to use it efficiently. I hit F9, and see 10 windows; I then need to squint, try to find the window I want, then use the mouse or the cursor keys to navigate to the one I want. Given the number of power users who say they use Exposé, I must be missing the boat here.

    My goal is not to make this a repeat of previous questions. I'm not asking "what are my alternatives?" (unless I've missed one above!). Rather, I'm asking: what are you, OS X power users, actually doing to handle the use case I described above?

    Another common use case for me is having multiple Excel spreadsheets open and multiple browser windows open, and I'm rapidly switching back and forth between one spreadsheet in particular and one browser window. Every time I cmd-tab, all spreadsheets or all browser windows appear: I don't want to see the ones I'm not working with, and they tend to hide the windows from the alternative app that I don't have in focus but would like to at least eyeball.

    Can you describe what your workflow is like, and how you rapidly and thoughtlessly switch between windows from apps that have multiple windows open?

    Read the article

  • Bluetooth connection using PyBluez

    - by srj0408
    I am working on Bluetooth -- not exactly on Bluetooth stack development, but to use Bluetooth in one of my projects. I had done all of this before using some of the usual tools like hciconfig and hcitool scan, then simple-agent, and using the serial module inside Python. But that was quite random: we were able to connect only one specific device based on its Bluetooth address, and there was no facility for reconnection once the devices were disconnected.

    Now I want to try out this stuff in a sequential manner like this (I am doing it all on an RPi, and at present on Ubuntu 12.04):

    i) Store some names in a file along with some other information with respect to each device.

    ii) Run a script to find the devices in the locality with those names and, if any one is found, report that. For this step, I took a reference from the Bluetooth book (BTBook) made available from MIT. Below is the script for the same, but it only searches for a single name:

        from bluetooth import *

        target_name = "XT1033"
        target_address = None

        nearby_devices = discover_devices()

        for address in nearby_devices:
            if target_name == lookup_name(address):
                target_address = address
                break

        if target_address is not None:
            print "found target bluetooth device with address ", target_address
            connect_socket(target_address)
        else:
            print "could not find target bluetooth device nearby"

    iii) Connect the device using a client socket. But I don't have any device on which I can write a simple Python script; my client can be any device that will be publishing data. Now I came across a script in the same book that actually connects to a client requesting permission to connect to the server:

        from bluetooth import *

        port = 1
        server_sock = BluetoothSocket(RFCOMM)
        server_sock.bind(("", port))
        server_sock.listen(1)

        client_sock, client_info = server_sock.accept()
        print "Accepted connection from ", client_info

        data = client_sock.recv(1024)
        print "received [%s]" % data

        client_sock.close()
        server_sock.close()

    Here client_sock, client_info = server_sock.accept() provides the client address and the port requested to be connected. Can I pass the address obtained from the earlier script to this, so that it connects the server to the client?

    iv) Then if the client gets disconnected, re-connect (a simple polling can be used).

    All this stuff can be done using bash and PyBluez functions, but I want to do it in a sequential manner. I am not a master in Python but I can do some small stuff. Can anyone guide me on this, or direct me to more useful resources through which I can continue my coding part after finding the "X", "Y" named devices?
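    To make step iii concrete, this is the client-side counterpart I have in mind, feeding in the address found by the discovery script (a sketch; port 1 simply matches the server example above):

        from bluetooth import *

        # target_address comes from the discovery step in (ii)
        client_sock = BluetoothSocket(RFCOMM)
        client_sock.connect((target_address, 1))
        client_sock.send("hello from the client")
        client_sock.close()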

    Read the article

  • SQL Express 2008 R2 on Amazon EC2 instance: tons of free memory, poor performance

    - by gravyface
    The old SQL Express 2005 was running on a low-end single Xeon CPU Dell server, RAID 5 with 7200 RPM disks, 2 GB RAM (SBS 2003). I have not done any baseline measurements on the old physical server, but the web app is used by half a dozen people (maybe 2 concurrently), so I figured "how bad can an Amazon EC2 instance be?".

    It's pretty horrible: a difference of 8 seconds of load time on one screen. First of all, I'm not a SQL guru, but here's what I've tried:

      - Had a Small Instance, now running a c1.medium (High-CPU Medium) Windows 2008 32-bit R2 EBS-backed instance running IIS 7.5 and SQL Express 2008 R2. No noticeable improvement.
      - Changed the page file from a fixed 256 MB to Automatic.
      - Set up a striped mirror from within Disk Management with two attached 1 GB EBS volumes. Moved the database and transaction log, left everything else on the boot EBS volume. No noticeable change.
      - Looked at memory: ~1000 MB of physical memory free (1.7 GB total). Changed the SQL instance to use a minimum of 1024 MB RAM; restarted the server, no change in memory usage. SQL is still only using ~28 MB of RAM(!).

    So I'm thinking: this database is tiny (28 MB), why isn't the whole thing cached in RAM? Surely that would speed up performance. The transaction log is 241 MB. Seems kind of large in comparison -- has this not been committed? Is it a cause of performance degradation? I recall something about recovery models and log sizes somewhere in my travels, but I'm not positive. (See the sketch below for what I plan to check.)

    Another thing: the old server was running SQL Express 2005. Not sure if that has any impact, but I tried changing the compatibility level from SQL 2000 to 2008, and that had no effect.

    Anyways, what else can I try here? Seems ridiculous to throw more virtual hardware at this thing. I know I/O is going to be rough on EBS volumes, but surely others are successfully running small .NET/SQL apps on reasonably priced instances?
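    For the transaction log question, this is what I plan to check next (a sketch; "MyAppDb" and the log's logical file name are placeholders for the real database):

        sqlcmd -S .\SQLEXPRESS -Q "SELECT name, recovery_model_desc FROM sys.databases"

        rem If it is FULL and no log backups are ever taken, SIMPLE may fit this app better:
        sqlcmd -S .\SQLEXPRESS -Q "ALTER DATABASE MyAppDb SET RECOVERY SIMPLE"
        sqlcmd -S .\SQLEXPRESS -d MyAppDb -Q "DBCC SHRINKFILE (MyAppDb_log, 64)"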

    Read the article

  • Remote Debian System Preventing Logon

    - by choobablue
    I have a dozen or so single-board computers on a network running Debian (squeeze) that I access via ssh (the ssh server is dropbear). To give an idea of the hardware of these computers: they have 1.2 GHz x86 processors, 1 GB of RAM and 4 GB flash drives formatted as ext2 (I avoided ext3 to prevent the added flash write stress from journaling); there is also a swap partition on the drive.

    Normally the setup I'm using works great and I can access all the computers. Every once in a while one will prevent access. What happens is I try to connect via ssh (PuTTY) and it gives me the login prompt; I enter the username and password and it responds 'Access Denied'. It will also refuse any public key in ~/.ssh/authorized_keys. The credentials are correct, as they worked previously. The computer responds to pings and PuTTY recognizes the server's public key, which implies to me the system is still running. Restarting the server fixes the problem and I can log in again. (I tried a temporary fix of putting shutdown -r now in the root crontab, but this doesn't seem to reliably run once the hang happens.) Once I restart, however, there doesn't seem to be any information in any of the system logs to indicate what happened; the logs are simply empty for that time period, as if the system had crashed.

    There is some custom software running on the system which appears to stop working (which is why I wanted to ssh to begin with). I'm assuming that this program is the source of the problems, but I'm unsure of how it would cause them and how to debug what is happening. The most likely explanation I can think of is that there is a memory leak in the other program that then prevents dropbear from spawning a new login shell (and crontab from executing shutdown), as there is not enough free memory. But looking at memory usage on the other (working) computers, there doesn't seem to be any meaningful increase in memory use to indicate a leak (unless it's a very big, fast-acting and rare leak). I would think that when the OS ran out of memory it would restart the system or kill processes (the Linux kernel does that, right?).

    The other thing I wonder about is whether the fact that they are running off flash drives could have some effect, especially the swap partition (which I think I should remove to prevent wear of the flash), but the flash drives are young (~1 month) and I don't think wear would be a factor yet.

    Does anybody have an idea of what could cause these symptoms, whether it could be caused by a memory leak, or something else I haven't thought of? And does anybody know of a method to try to debug the problem and find out more information about what's going wrong? (One small idea is sketched below.)
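    One idea I am considering in the meantime: leave a breadcrumb trail so the next hang shows what memory looked like beforehand (a sketch; the path and interval are arbitrary, and logging to flash adds writes, so a tmpfs or remote syslog target may be kinder to the drives):

        # /etc/cron.d/memwatch -- snapshot memory state every 5 minutes
        */5 * * * * root { date; free -m; ps aux --sort=-rss | head -n 6; } >> /var/log/memwatch.log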

    Read the article

  • ERROR: Linux route add command failed: external program exited with error status: 4

    - by JohnMerlino
    A remote machine running Fedora uses OpenVPN, and multiple developers have successfully connected to it with their OpenVPN clients. However, I am running Ubuntu 12.04 and I am having trouble connecting to the server via VPN. I copied ca.crt, home.key, and home.crt from the server to the /etc/openvpn folder on my local machine. My client.conf file looks like this:

        ##############################################
        # Sample client-side OpenVPN 2.0 config file #
        # for connecting to multi-client server.     #
        #                                            #
        # This configuration can be used by multiple #
        # clients, however each client should have   #
        # its own cert and key files.                #
        #                                            #
        # On Windows, you might want to rename this  #
        # file so it has a .ovpn extension           #
        ##############################################

        # Specify that we are a client and that we
        # will be pulling certain config file directives
        # from the server.
        client

        # Use the same setting as you are using on
        # the server.
        # On most systems, the VPN will not function
        # unless you partially or fully disable
        # the firewall for the TUN/TAP interface.
        ;dev tap
        dev tun

        # Windows needs the TAP-Win32 adapter name
        # from the Network Connections panel
        # if you have more than one. On XP SP2,
        # you may need to disable the firewall
        # for the TAP adapter.
        ;dev-node MyTap

        # Are we connecting to a TCP or
        # UDP server? Use the same setting as
        # on the server.
        ;proto tcp
        proto udp

        # The hostname/IP and port of the server.
        # You can have multiple remote entries
        # to load balance between the servers.
        remote xx.xxx.xx.130 1194
        ;remote my-server-2 1194

        # Choose a random host from the remote
        # list for load-balancing. Otherwise
        # try hosts in the order specified.
        ;remote-random

        # Keep trying indefinitely to resolve the
        # host name of the OpenVPN server. Very useful
        # on machines which are not permanently connected
        # to the internet such as laptops.
        resolv-retry infinite

        # Most clients don't need to bind to
        # a specific local port number.
        nobind

        # Downgrade privileges after initialization (non-Windows only)
        ;user nobody
        ;group nogroup

        # Try to preserve some state across restarts.
        persist-key
        persist-tun

        # If you are connecting through an
        # HTTP proxy to reach the actual OpenVPN
        # server, put the proxy server/IP and
        # port number here. See the man page
        # if your proxy server requires
        # authentication.
        ;http-proxy-retry # retry on connection failures
        ;http-proxy [proxy server] [proxy port #]

        # Wireless networks often produce a lot
        # of duplicate packets. Set this flag
        # to silence duplicate packet warnings.
        ;mute-replay-warnings

        # SSL/TLS parms.
        # See the server config file for more
        # description. It's best to use
        # a separate .crt/.key file pair
        # for each client. A single ca
        # file can be used for all clients.
        ca ca.crt
        cert home.crt
        key home.key

        # Verify server certificate by checking
        # that the certificate has the nsCertType
        # field set to "server". This is an
        # important precaution to protect against
        # a potential attack discussed here:
        # http://openvpn.net/howto.html#mitm
        #
        # To use this feature, you will need to generate
        # your server certificates with the nsCertType
        # field set to "server". The build-key-server
        # script in the easy-rsa folder will do this.
        ns-cert-type server

        # If a tls-auth key is used on the server
        # then every client must also have the key.
        ;tls-auth ta.key 1

        # Select a cryptographic cipher.
        # If the cipher option is used on the server
        # then you must also specify it here.
        ;cipher x

        # Enable compression on the VPN link.
        # Don't enable this unless it is also
        # enabled in the server config file.
        comp-lzo

        # Set log file verbosity.
        verb 3

        # Silence repeating messages
        ;mute 20

    But when I start the client and look in /var/log/syslog, I notice the following error:

        May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37
        May 27 22:13:51 myuser ovpn-client[5626]: ERROR: Linux route add command failed: external program exited with error status: 4
        May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 172.27.12.0 netmask 255.255.255.0 gw 10.27.12.37
        May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.255 gw 10.27.12.37

    And I am unable to reach the server over the VPN:

        $ ssh [email protected]
        ssh: connect to host xxx.xx.xx.130 port 22: No route to host

    What may I be doing wrong?
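
    One way to narrow this down (a hedged diagnostic sketch, not a confirmed fix): syslog records only the exit status, while the route command itself prints the actual reason it failed. Re-running the exact command from the log by hand reveals that message, and inspecting the routing table shows whether a conflicting route already exists. The addresses below are copied from the log above:

        # Show any existing routes that overlap the VPN subnets
        ip route show | grep 10.27.12

        # Re-run the exact command OpenVPN tried; route prints why it failed
        sudo /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37
        echo $?

    Note that 10.27.12.1 is not a valid network address under a 255.255.255.252 mask (the /30 containing .1 starts at 10.27.12.0), which is one plausible reason the kernel would reject the add.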

    Read the article

  • Looking For iPhone 4S Alternatives? Here Are 3 Smartphones You Should Consider

    - by Gopinath
    If you are going to buy an iPhone 4S on a two-year contract in the USA, Europe, or Australia, you may not find it expensive. But if you are planning to buy it in any other part of the world, you will definitely feel the heat of the ridiculous iPhone 4S price. In India the iPhone 4S costs approximately $1000, which is 30% more than the price tag of an unlocked iPhone sold in the USA. Personally I love iPhones, as there is no match for the user experience provided by Apple and the wide range of really meaningful applications available for the iPhone. But it breaks my heart to spend $1000 on a phone, so I'm forced to look at the alternatives available in the market. Here are the four iPhone 4S alternatives available in almost all the countries where the iPhone 4S is sold.

    Google Galaxy Nexus

    The Galaxy Nexus is Google's own Android smartphone, manufactured by Samsung and sold under the Google Nexus brand. It is the purest Android phone available in the market, without any bloatware or the custom user interfaces found on other Androids. The Galaxy Nexus is also the first Android phone to ship with the latest version of the Android OS, Ice Cream Sandwich, and it sets the benchmark for the rest of the Android phones about to enter the market. In Google's own words: "Galaxy Nexus: Simple. Beautiful. Beyond Smart." BGR's review summarizes the phone as follows:

    This is almost comical at this point, but the Samsung Galaxy Nexus is my favourite Android device in the world. Easily replacing the HTC Rezound, the Motorola DROID RAZR, and Samsung Galaxy S II, the Galaxy Nexus champions in a brand new version of Android that pushes itself further than almost any other mobile OS in the industry.

    Samsung Galaxy S II

    The one company that manages to sell more smartphones than Apple is Samsung, which recently displaced Apple from the top smartphone seller spot and occupied it with loads of pride. Samsung's Galaxy S II fits as one of the best alternatives to Apple's iPhone 4S with its beautiful design and remarkable performance. Engadget's review summarizes it as follows:

    It's the best Android smartphone yet, but more importantly, it might well be the best smartphone, period. Of course, a 4.3-inch screen size won't suit everyone, no matter how stupendously thin the device that carries it may be, and we also can't say for sure that the Galaxy S II would justify a long-term iOS user forsaking his investment into one ecosystem and making the leap to another. Nonetheless, if you're asking us what smartphone to buy today, unconstrained by such externalities, the Galaxy S II would be the clear choice. Sometimes it's just as simple as that.

    Nokia Lumia 800

    Here comes the unexpected Windows Phone entry into the boxing ring. Maybe these phones are not yet as great as the Androids available in the market today, but they are catching up very quickly. The Nokia Lumia 800 in particular seems to be the first Windows Phone 7 device aimed at competing seriously with the Androids and iPhones on the market. There are reports that the Nokia Lumia 800 is outselling all Androids in the UK, and a few high-profile tech blogs are calling it the king of Windows Phone. Considering this phone while evaluating iPhone 4S alternatives will not disappoint you, we assure you.

    Droid RAZR

    Remember the Motorola Droid that swept the entire Android market a couple of years ago? The first two versions of the Motorola Droid were the best in the market, and they outperformed almost every other Android phone in those days. With the invasion of Samsung Androids, Motorola lost its charm. With the recent release of the Droid RAZR, Motorola seems to be heading in the right direction to reclaim that prestige. The Droid RAZR is the thinnest smartphone available in the market, and its beauty is not just skin deep. Here is a review of the phone from the Engadget blog:

    The RAZR's beauty is not only skin deep. The LTE radio, 1.2GHz dual-core processor and 1GB of RAM make sure this sleek number is ready to run with the big boys. It kept pace with, and in some cases clearly outclassed its high-end competition. Despite its deficiencies in the display department and underwhelming battery life, the RAZR looks to be a perfectly viable alternative when considering the similarly-pricey Rezound and Galaxy Nexus.

    Further Reading

    So we have seen the four alternatives to the iPhone 4S available in the market, and I personally would buy a Samsung smartphone if I didn't have the money for an iPhone 4S. If you are interested in diving deeper into the alternatives, here are a few links to help you do more research:

    Apple iPhone 4S vs. Samsung Galaxy Nexus vs. Motorola Droid RAZR: How Their Specs Compare – by Huffington Post
    Nokia Lumia 800 vs. iPhone 4S vs. Nexus Galaxy: Spec Smackdown – by PC World
    Browser Speed Test: Nokia Lumia 800 vs. iPhone 4S vs. Samsung Galaxy S II – by Gizmodo
    iPhone 4S vs Samsung Galaxy S II – by Pocket-lint
    Apple iPhone 4S vs. Samsung Galaxy S II – by Techie Buzz

    This article, titled "Looking For iPhone 4S Alternatives? Here Are 3 Smartphones You Should Consider", was originally published at Tech Dreams.

    Read the article

  • SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual database

    - by pinaldave
    I recently downloaded Idera's SQL virtual database and tested it. There are a few things about this tool which caught my attention.

    My Scenario

    It is quite common in real life that observing or retrieving older data becomes necessary after it has changed over time. Our full database backup was 40 GB in size, and restoring it on our production server usually took around 16 to 22 minutes, depending on the load on the server; the exact time varies from one server to another as per the configuration of the machine. Some other issues we used to have are the following: to restore the 40 GB database we needed at least that much free space on our production server, and once in a while we even had to make changes in the restored database and then work from that changed copy, which made the exercise even more time-consuming.

    My Solution

    I had heard a lot about Idera's SQL virtual database tool, and right after we started to test it, we found that it really delivers what it promises. Using the software was very easy, and we were able to restore our database from backup in less than 2 minutes, sparing us the usual 16–22 minutes; the whole task was finished in a total of 10 minutes. Another interesting observation is that no additional space is needed to restore the database — not even a single additional MB on the drive. We can use the database in the same way as a regular database, with no additional configuration or setup.

    Let us look at the most relevant points of this product based on my initial experience:

    Quick restoration of the database backup
    No additional space required for database restoration
    The virtual database has no physical .MDF or .LDF files
    The restored database is, in fact, the backup file presented as a virtual database
    DDL and DML queries can be executed against this virtually restored database
    Regular backup operations can be run against the virtual database, creating a physical .bak file for future use
    No observed performance degradation on either the original database or the restored virtual database
    Additional T-SQL queries can be run against the virtual database

    Well, this summarizes my quick review, and as I was saying, I am very impressed with the product and plan to explore it more. There are many features in this tool which I think can be very useful if properly understood. I took a few screenshots using my demo database afterwards. I am surprised by its performance, so I want to know how exactly this feature works — specifically, why it does not create any additional files and yet still allows updates on the virtually restored database. I guess I will have to send an e-mail to the developers at Idera and try to figure this out with them. I think this tool is very useful, and it delivers a level of performance well beyond what I expected. Soon, I will write a review covering additional uses of SQL virtual database. If you are using SQL virtual database in your production environment, I am eager to learn more about your experience with it.

    The 'Virtual' Part of virtual database

    When I set out to test this software, I thought virtual database had something to do with Hyper-V or virtualization. In fact, a virtual database is a database which shows up in your SQL Server Management Studio without actually being restored or even created: the tool creates a database in SSMS from the backup of that same database, and the backup then behaves virtually the same way as the original database.

    Potential Usage of virtual database

    As soon as I described this tool to my teammate, his very first reaction was, "Hey, if we have this then there is no need for log shipping." I find his comment very interesting, as log shipping is something where logs are moved to another server but the resulting database takes no updates; I would rather compare virtual database with snapshot replication — whatever a snapshot-replicated database can be used and configured for, a virtual database can be used for similarly. I firmly believe we can use it for reporting purposes, and once the database is configured, I think the uses of this tool are unlimited. I will have to spend some more time studying it and will get back to you.

    Screenshots (click to see larger images): virtual database console; hard-drive space before virtual database setup; Attach Full Backup screen; backup on hard drive; Attach Full Backup screen with settings; virtual database setup — less than 60 seconds; virtual database setup — online; hard-drive space after virtual database setup; point-in-time recovery option — timeline view; virtual database summary; no performance difference between regular DB and virtual DB.

    Please note that all SQL Server MVPs get a free license of this software.

    Reference: Pinal Dave (http://blog.SQLAuthority.com), Idera (virtual database)
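
    To make the "use it like a regular database" claim concrete, here is a minimal sketch of the kind of commands one might run once the virtual database has been mounted through Idera's console. This is an assumption-laden illustration, not Idera's documented workflow: the database name MyDB_Virtual, the table dbo.Orders, and the backup path are all made up for the example.

        # Confirm the virtual database is visible to the SQL Server instance
        sqlcmd -S localhost -E -Q "SELECT name FROM sys.databases"

        # Run ordinary queries against it, exactly as against a physical database
        # (MyDB_Virtual and dbo.Orders are hypothetical names)
        sqlcmd -S localhost -E -Q "SELECT TOP 10 * FROM MyDB_Virtual.dbo.Orders"

        # Take a regular backup of the virtual database to produce a physical .bak
        sqlcmd -S localhost -E -Q "BACKUP DATABASE MyDB_Virtual TO DISK = 'C:\Backups\FromVirtual.bak'"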

    Read the article

< Previous Page | 602 603 604 605 606 607 608 609 610 611 612 613  | Next Page >