Search Results

Search found 2242 results on 90 pages for 'fuzzy comparison'.

Page 70 of 90

  • Test your internet connection - Emtel Mobile Internet

    After yesterday's report on Emtel Fixed Broadband (I'm still wondering where the 'fixed' part is), I did the same tests on Emtel Mobile Internet. For this I'm using the Huawei E169G HSDPA USB stick, connected to the same machine. Actually, this is my fail-safe internet connection, and the system automatically switches between them if a problem (a timeout, for example) is detected on the main line. For better comparison I used exactly the same servers on Speedtest.net.

    The results

    Following are the results for Rose Hill (hosted by Emtel) and Frankfurt, Germany (hosted by Vodafone DE), respectively:

    Speedtest.net result of 31.05.2013 between Flic en Flac and Rose Hill, Mauritius (Emtel - Mobile Internet)
    Speedtest.net result of 31.05.2013 between Flic en Flac and Frankfurt, Germany (Emtel - Mobile Internet)

    As you can easily see, there is a big difference in speed between national and international connections. More interesting are the results related to the download/upload ratio. I'm not sure whether connections over Emtel Mobile Internet are asymmetric, or symmetric like the Fixed Broadband. It might be interesting to find out. The first test result might give us a clue that the connection could be asymmetric, with a ratio of 3:1, but again I'm not sure. I'll find out and post an update on this.

    It depends on network coverage

    Later today I was on tour with my tablet, a Samsung Galaxy Tab 10.1 (model GT-P7500) running Android 4.0.4 (Ice Cream Sandwich), and did some more tests using the Speedtest.net app. The results are as expected: in areas with better network coverage you will get better results, at least as long as you stay inside the national networks. For anything abroad, it doesn't really matter. But see for yourselves:

    Speedtest.net result of 31.05.2013 between Cascavelle and servers in Rose Hill, Mauritius (Emtel - Mobile Internet), Port Louis, Mauritius and Kuala Lumpur, Malaysia

    It's rather shocking and frustrating to see how the speed to international destinations drops, and the full capability of the tablet's integrated modem (HSDPA: 21 Mbps; HSUPA: 5.76 Mbps) isn't used either. I guess this demands more tests in other areas of the island, like Ebene, Pailles or Port Louis. I'll keep you updated...

    The question remains: Alternatives?

    After the publication of the test results on Fixed Broadband I had some exchange with others on Facebook. Sadly, it seems that there are really no alternatives to what Emtel is offering at the moment. There are the various internet packages by Mauritius Telecom feat. Orange, like ADSL, MyT and Mobile Internet, and there is Bharat Telecom with their Bees offer, which is currently limited to Ebene and parts of Quatre Bornes.

    Read the article

  • CMS vs Admin Panel?

    - by Bob
    Okay, so this probably seems like an unusual, more grammar-related question, but I was unsure of what to call it. If you use software such as vBulletin or MyBB or even Blogger and you're the administrator (or other, lesser position such as moderator) of the forum, or publisher/author of the blog, you generally have access to something of an "admin panel". For example, vBulletin's admin panel looks like this and Blogger's admin panel looks something like this. While they both look different and do different things, the goal is fundamentally the same: to provide the user with a method for adding, modifying, or deleting content... to let them control and administrate their forum or blog. Also, they're both made specifically by the company for use in a specific product. Now, there are also options like Drupal. It seems to offer quite a bit more and be quite a bit more generalized. How does something like this work? If you were freelancing, would you deploy a website with Drupal, or would it be something the client might already have installed on their own server? I've never really used Drupal, only heard about it, so please let me know. Also, there seem to be other options like cPanel, a sort of global CMS that allows you to administrate your entire website. How do those work in comparison to Drupal, or to the administrative panels that come with vBulletin? They seem to serve related, but different purposes. Basically, what is the norm? If I'm developing a web application for a group that needs to be able to edit their website without going into the code or the database (or rather, wants to work in a graphical, easy-to-use content-management/admin panel), would it also be necessary to write my own miniature admin panel? Or would I be able to send them off knowing that they have cPanel? Or could something like Drupal fill this void? Again, I'm a little new to web development, and I'm working on planning out my first, real, large website. So I need a little advice on the standards and expectations for web development - security and coding practices aside, what should I be looking for as far as usability and administration for the client (who will be running the site once I'm done creating it)? Any extra tips would also be appreciated! Oh, and a little background: I'm writing the website in Ruby on the Sinatra framework (both Ruby and Sinatra are things I'm fairly comfortable with), and I'm not being paid to make the website (I will also be a user, and one of the three administrators of the website) - it's being built for a club I'm in.

    Read the article

  • Cannot get 3D OpenGL support in Vmware guests, how can I fix this?

    - by jjapol
    I have been working at this problem for 2 days now. I cannot for the life of me enable 3D support in VMware 9 guests. My specifications are:

    Hardware: Dell Latitude E5520 laptop
    Processor: Intel i7-2620M CPU @ 2.70GHz × 4
    Memory: 8GB
    Video: Intel Sandybridge Mobile x86/MMX/SSE2
    OS: Ubuntu 12.04.1 LTS, 32 bit
    VMware Workstation: 9.0.1 build-894247
    VMware guest: Windows 7

    Glxgears functions fine; the frame rate is ~60fps. Starting the Windows 7 guest in VMware throws the following errors: "No 3D support is available from the host." and "Hardware graphics acceleration is not available." I've read through this VMware forum thread, but again the hardware in the post is different (nVidia). I've followed the instructions at this Ask Ubuntu post as closely as possible, as the question is nearly the same as mine although my hardware is different. Answer 1, regarding setting mks.gl.allowBlacklistedDrivers = TRUE; in my vmx configuration file, causes the VM to crash when it starts. The second answer I followed as closely as possible: I uninstalled VMware, did sudo apt-get install build-essential linux-headers-$(uname -r) at a terminal, added the PPA https://launchpad.net/~glasen/+archive/intel-driver, then at a terminal did sudo apt-get update && sudo apt-get upgrade -y. I reinstalled VMware and have the same results: no 3D in guests. I'm getting the feeling that something is awry with the Sandy Bridge driver, but I can't seem to come up with any solutions. Has anyone out there run across this problem also? By the way, the operation of the likes of SolidWorks and AutoCAD within a Windows 7 guest does appear to be improved in VMware 9 vs VMware 8, in spite of the fact that 3D support is lacking in the Windows 7 guest. I'd also add that my glxinfo file was nearly identical to the glxinfo file posted at askubuntu.com/questions/181829/…. I had a total of seven minor differences per a comparison using Meld.

    Read the article

  • Turnkey with LightSwitch

    - by Laila
    Microsoft has long wanted to find a replacement for Microsoft Access. The best attempt yet, which is due out in, or before, September, is Visual Studio LightSwitch, with which it is said to be as 'easy as flipping a switch' to use Silverlight to create simple form-driven business applications. It is easy to get confused by the various initiatives from Microsoft. No, this isn't WebMatrix. There is no 'Razor', for this isn't meant for cute little ecommerce sites, but is designed to build simple database applications of the card-box type. It is more clearly a .NET-based solution to the problem that every business seems to suffer from: the plethora of Access-based and Excel-based 'private' and departmental database applications. These are a nightmare for any IT department since they are often 'stealth' applications built by the business in the teeth of opposition from the IT Department zealots. As they are undocumented, it is scarily easy to bring a whole department into disarray by decommissioning a PC tucked under a desk somewhere. With LightSwitch, it is easy to re-write such applications in a standard, maintainable way, using a SQL Server database, deployed somewhere reasonably safe such as Azure. Even SharePoint or Windows Communication Foundation can be used as data sources. Oracle's ApEx has taken off remarkably well, and has shaken the perception that, for the business user, Oracle must remain a mystic force accessible only to the priests and acolytes. Microsoft, by comparison, had only Access, which was first released in 1992, the year of the Madonna conical bustier. It looks just as dated. Microsoft badly needed an entirely new solution to the same business requirement that led to Access's and FoxPro's long-time popularity, but one with the same allure as ApEx. LightSwitch is sound in its ideas, and comfortingly conventional in its architecture. By giving easy access to SQL Server databases, and providing a 'thumb and blanket' migration path to Access-heads, LightSwitch seems likely to offer a simple way of pulling more Microsoft users into the .NET community. If Microsoft puts its weight behind it, then it will give some glimmer of hope to the many Silverlight developers that Microsoft is capable of seeing through its .NET revolution.

    Read the article

  • Learn programming backwards, or "so I failed the FizzBuzz test. Now what?"

    - by moraleida
    A Little Background

    I'm 28 today, and I've never had any formal training in software development, but I do have two higher-education degrees, equivalent to a B.A. in Public Relations and an Executive MBA focused on Project Management. I worked in those fields for about 6 years total, and then, 2.5 years ago, I quit/lost my job and decided to shift directions. After a month thinking things through I decided to start freelancing, developing small websites in WordPress. I self-learned my way into it, and today I can say I run a humble but successful career developing themes and plugins from scratch for my clients - mostly agencies outsourcing some of their dev work for medium/large websites. But sometimes I just feel that not having studied enough math, or not having a formal understanding of things, really holds me back when I have to compete or work with more experienced developers. I'm constantly looking for ways to learn more, but I seem to lack the basics. Unfortunately, spending 4 more years in Computer Science is not an option right now, so I'm trying to learn all I can from books and online resources. This method is never going to have NASA employ me, but I really don't care right now. My goal is to first pass the bar and be able to call myself a real programmer. I'm currently spending my spare time studying Java For Programmers (to get a hold on a language everyone says is difficult/demanding), reading excerpts of Code Complete (to get hold of best practices) and also Code: The Hidden Language of Computer Hardware and Software (to grasp the inner workings of computers).

    TL;DR

    So, my current situation is this: I'm basically capable of writing any complete system in PHP (with the help of Google and a few books), integrating Ajax, SQL and whatnot, maybe a little slower than an experienced dev would expect due to all the research involved. But I was stranded yesterday trying to figure out (not Google) a solution for the FizzBuzz test, because I didn't have the modulus operator - as in if($n1 % $n2 == 0) - memorized. What would you suggest as a good way to solve this dilemma? What subjects/books should I study that would get me solving problems faster and maybe more "in a programmer's way"? EDIT - It seems there was some confusion about what I did not know when solving FizzBuzz. Maybe I didn't express myself right: I knew the steps needed to solve the problem. What I didn't memorize was the modulus operator. The problem was in transposing basic math to the program, not in knowing basic math. I took the test for fun, after reading about it on Coding Horror. I just decided it was a good baseline for comparing myself with formally-trained devs. I used it as an example of how not having dealt with math in a computer environment before makes me lose time looking up basic things like the modulus operator in order to solve simple problems.
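
    The operator the asker was missing is just %, the remainder (modulus) operator. As a point of reference, here is a minimal FizzBuzz sketch in Java, the language the asker is currently studying; the class name and the 1-100 range are the usual convention for the exercise, not something specified in the post:

      public class FizzBuzz {
          public static void main(String[] args) {
              for (int n = 1; n <= 100; n++) {
                  if (n % 15 == 0) {              // divisible by both 3 and 5
                      System.out.println("FizzBuzz");
                  } else if (n % 3 == 0) {
                      System.out.println("Fizz");
                  } else if (n % 5 == 0) {
                      System.out.println("Buzz");
                  } else {
                      System.out.println(n);
                  }
              }
          }
      }

    The whole exercise reduces to knowing that n % d == 0 means "n is evenly divisible by d", which is exactly the piece the asker had to look up.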

    Read the article

  • Why would more CPU cores on virtual machine slow compile times?

    - by Sid
    [edit#2] If anyone from VMware can hit me up with a copy of VMware Fusion, I'd be more than happy to do the same as a VirtualBox vs VMware comparison. Somehow I suspect the VMware hypervisor will be better tuned for hyperthreading (see my answer too).

    I'm seeing something curious. As I increase the number of cores on my Windows 7 x64 virtual machine, the overall compile time increases instead of decreasing. Compiling is usually very well suited to parallel processing, as in the middle part (post dependency mapping) you can simply call a compiler instance on each of your .c/.cpp/.cs/whatever files to build partial objects for the linker to take over. So I would have imagined that compiling would scale very well with the number of cores. But what I'm seeing is:

    8 cores: 1.89 sec
    4 cores: 1.33 sec
    2 cores: 1.24 sec
    1 core: 1.15 sec

    Is this simply a design artifact of a particular vendor's hypervisor implementation (type 2: VirtualBox in my case), or something more pervasive across VMs that keeps hypervisor implementations simpler? With so many factors, I seem to be able to make arguments both for and against this behavior - so if someone knows more about this than me, I'd be curious to read your answer. Thanks, Sid

    [edit: addressing comments]
    @MartinBeckett: Cold compiles were discarded.
    @MonsterTruck: Couldn't find an open-source project to compile directly. Would be great, but I can't screw up my dev env right now.
    @Mr Lister, @philosodad: Have 8 hw threads, using VirtualBox, so it should be a 1:1 mapping without emulation.
    @Thorbjorn: I have 6.5GB for the VM and a smallish VS2012 project - it's quite unlikely that I'm swapping in/out thrashing the page file.
    @All: If someone can point to an open-source VS2010/VS2012 project, that might be a better community reference than my (proprietary) VS2012 project. Orchard and DNN seem to need environment tweaking to compile in VS2012. I really would like to see if someone with VMware Fusion also sees this (for VMware vs VirtualBox compartmentalization).

    Test details:
    Hardware: MacBook Pro Retina
    CPU: Core i7 @ 2.3GHz (quad core, hyperthreaded = 8 cores in Windows Task Manager)
    Memory: 16 GB
    Disk: 256GB SSD
    Host OS: Mac OS X 10.8
    VM type: VirtualBox 4.1.18 (type 2 hypervisor)
    Guest OS: Windows 7 x64 SP1
    Compiler: VS2012 compiling a solution with 3 C# Azure projects
    Compile times measured by a VS2012 plugin called 'VSCommands'
    All tests run 5 times, first 2 runs discarded, last 3 averaged

    Read the article

  • Oracle Social Network Developer Challenge: HarQen Nodal

    - by Kellsey Ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. We wrapped the Oracle Social Network Developer Challenge last week at OpenWorld, and this week, I’ll be sharing all the entries. All the teams that entered our challenge did a ton of work and built really interesting integrations with Oracle Social Network, and I want to showcase their hard work and innovative ideas. Today, I give you Nodal from the HarQen (@harqen) team: Kris Gösser (@krisgosser), Jesse Vogt (@jesse_vogt) and Matt Stockton (@mstockton). The guys from HarQen built Nodal to provide a visual way to navigate your connections and conversations in Oracle Social Network and view relationships. Using Nodal, you can:

    - Search through names and profiles in Oracle Social Network.
    - Choose people and view their social graphs in a visually useful way.
    - Expand nodes in the social graph and add that person’s social graph to the Nodal view for comparison.
    - Move nodes around and lock them in place for easier viewing, using a physics engine for movement.
    - Adjust the physics engine properties according to your viewing preferences.
    - Select nodes in the social graph and create a conversation directly based on the selection.

    Here are some shots of Nodal. They really don’t do the physics engine justice, but maybe the guys at HarQen will post a video of what they did for your viewing pleasure.

    Nodal’s visuals wowed the judges and the audience, and anyone with a decent-sized social network presence understands the need for good network visualization. Tools like Nodal allow you to discover hidden connections in your network, maximize the value of your weak ties, and find mavens, a very important key to getting work done. Thanks to the HarQen team for participating in our challenge. We hope they had a good experience. Look for the details of the other entries this week.

    Read the article

  • Showrooming: What's the big deal?

    - by David Dorf
    There's been lots of chatter recently on how retailers will combat showrooming this holiday season. Best Buy and Target, for example, plan to price-match certain online sites. But from my perspective, the whole showrooming concept is overblown. Yes, mobile phones make it easier to comparison-shop, but consumers have been doing that all along. Retailers have to work hard to merchandise their stores with the right products at the right price with the right promotions. It's Retail 101. Yeah, OK, many websites don't have to charge tax, so they have an advantage, but they also have to cover shipping costs. Brick-and-mortar stores have the opportunity to provide expertise, fit, and instant gratification, all of which are pretty big advantages. I see lots of studies that claim a large percentage of shoppers are showrooming. Now I don't do much shopping, but when I do I rarely see anyone scanning UPC codes in the aisles. If you dig into those studies, the question is usually something like, "Have you used your mobile phone to price-compare while shopping in the last year?" Well yeah, I did it once -- out of 20 shopping trips. And by the way, the in-store price was close enough to just buy the item. Based on casual observation and informal surveys of friends, showrooming is not the modus operandi for today's busy shoppers. I never see people showrooming in grocery stores, and most people don't bother for fashion. For big purchases like appliances and furniture, I bet most people do their research online before entering the store. The cases where I've done it were to see if a promotion was in fact a good deal, or even to make sure the in-store price was the same as the online price for the same brand. So, if you think you're a victim of showrooming, I suggest you look at the bigger picture. Are you providing an engaging store experience? Are you allowing customers to shop the way they want to shop, using various touchpoints? Are you monitoring the competition to ensure prices are competitive? Are your promotions attracting the right customers? Hubert Joly, CEO of Best Buy, recently commented that showrooming might just get more people into his stores: "Once customers are in our stores, they're ours to lose."

    Read the article

  • Anticipating JavaOne 2012 – Number 17!

    - by Janice J. Heiss
    As I write this, JavaOne 2012 (September 30-October 4 in San Francisco, CA) is just over a week away -- the seventeenth JavaOne! I’ll resist the impulse to travel in memory back to the early days of JavaOne. But I will say that JavaOne is a little like your birthday or New Year’s in that it invites reflection, evaluation, and comparison. It’s a time when we take the temperature of Java and assess the world of information technology generally. At JavaOne, insight and information flow amongst Java developers like at no other time of the year. This year, the status of Java seems more secure in the eyes of most Java developers, who agree that Oracle is doing an acceptable job of stewarding the platform, and while the story is still in progress, few doubt that Oracle is engaging strongly with the Java community and wants to see Java thrive. From my perspective, the biggest news about Java is the growth of some 250 alternative languages for the JVM -- from Groovy to Jython to JRuby to Scala to Clojure and on and on -- offering both new opportunities and challenges. The JVM has proven itself to be unusually flexible, resulting in an embarrassment of riches in which, more and more, developers are challenged to find ways to optimally mix several different languages on projects. To the matter at hand -- I can say with confidence that Oracle is working hard to make each JavaOne better than the last: more interesting, more stimulating, more networking, and more fun! A great deal of thought and attention is being devoted to the task. To free up time for the 475 technical session/Birds-of-a-Feather/Hands-on-Lab slots, the Java Strategy, Partner, and Technical keynotes will be held on Sunday, September 30, beginning at 4:00 p.m. Let’s not forget Java Embedded@JavaOne, which is being held Wednesday, Oct. 3rd and Thursday, Oct. 4th at the Hotel Nikko. It will provide business decision makers, technical leaders, and ecosystem partners important information about Java Embedded technologies and new business opportunities. This year's JavaOne theme is “Make the Future Java”. So come to JavaOne and make your future better by:

    --Choosing from 475 sessions given by the experts to improve your working knowledge and coding expertise
    --Networking with fellow developers in both casual and formal settings
    --Enjoying world-class entertainment
    --Delighting in one of the world’s great cities (my home town)

    Hope to see you there! Originally published on blogs.oracle.com/javaone.

    Read the article

  • How to avoid oscillation by async event based systems?

    - by inf3rno
    Imagine a system where there are data sources which need to be kept in sync. A simple example is model-view data binding in MVC. I intend to describe these kinds of systems in terms of data sources and hubs. Data sources publish and subscribe to events, and hubs relay events to data sources. When it handles an event, a data source changes its state as described by the event. When it publishes an event, the data source puts its current state into the event, so other data sources can use that information to change their state accordingly. The only problem with this system is that events can be reflected back from the hub or from the other data sources, and that can put the system into an infinite oscillation (async) or an infinite loop (sync). For example:

      A -- data source
      B -- data source
      H -- hub

      A -> H -> A           -- reflection from the hub
      A -> H -> B -> H -> A -- reflection from another data source

    In the synchronous case it is relatively easy to solve this issue: you can compare the current state with the event, and if they are equal, you don't change the state and you don't raise the same event again. In the asynchronous case I have not found a solution yet. The state comparison does not work with async event handling, because there is only eventual consistency, and new events can be published from an inconsistent state, causing the same oscillation. For example:

      A(*->x) -> H -> B(y->x)   -- can go parallel with
      B(*->y) -> H -> A(x->y)
      -- so first A changes to state x while B changes to state y
      -- then B changes to state x while A changes to state y
      -- and so on for eternity...

    What do you think: is there an algorithm to solve this problem? If there is a solution, is it possible to extend it to prevent oscillation caused by multiple hubs, multiple different events, etc.?

    Update: I don't think I can make this work without a lot of effort. I think this problem is just the same as the one we have when syncing multiple databases in a distributed system. So I think what I really need is constraints if I want to prevent this problem in an automatic way. What constraints do you suggest?
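
    For the synchronous case, the "compare the current state with the event" guard described above is straightforward to sketch. Below is a minimal Java sketch of that idea only; the DataSource, Hub and Event types and their method names are invented for illustration, since the question does not define an API:

      import java.util.ArrayList;
      import java.util.List;

      public class SyncGuardSketch {

          static final class Event {
              final String state;
              Event(String state) { this.state = state; }
          }

          static final class Hub {
              private final List<DataSource> sources = new ArrayList<>();
              void register(DataSource s) { sources.add(s); }
              void relay(DataSource origin, Event e) {
                  for (DataSource s : sources) {
                      if (s != origin) s.handle(e);   // the hub echoes events to the other sources
                  }
              }
          }

          static final class DataSource {
              private final String name;
              private final Hub hub;
              private String state = "";

              DataSource(String name, Hub hub) {
                  this.name = name;
                  this.hub = hub;
                  hub.register(this);
              }

              void handle(Event e) {
                  if (state.equals(e.state)) return;      // already in this state: neither update nor re-publish
                  state = e.state;
                  System.out.println(name + " is now " + state);
                  hub.relay(this, new Event(state));      // propagate the change exactly once
              }
          }

          public static void main(String[] args) {
              Hub hub = new Hub();
              DataSource a = new DataSource("A", hub);
              DataSource b = new DataSource("B", hub);
              a.handle(new Event("x"));   // A updates, B follows, and the echo dies at the guard
          }
      }

    As the question points out, this guard is exactly what breaks down once handling becomes asynchronous: the comparison is made against a state that may already be stale, which is why the update at the end leans towards constraints rather than a purely local check.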

    Read the article

  • How best to look up objects by label?

    - by dsollen
    I am writing a server backed by a pre-written API. I'm going to get a number of strings representing ports, signals, paths, etc. I need to look up the object associated with a given label; these objects are all in memory (no SQL magic to do this for me). My question is: how best do I associate a given unique label with the mutable object it represents? I have enough objects that looking through every signal or every port to find the one that matches is possible, but may be slightly too slow. To be honest, the direct 'look at every object' method is probably good enough for so small a body of objects and anything else is premature optimization, but I'm still curious what the proper solution would be if I thought my signals were going to grow a bit larger. As I see it there are two options available. The first would be to create a 'store' that is a simple map between object and label. I could have it so that every time I call addObject the object is automatically saved into a hashmap or the like. This works, but relies on my properly adding and deleting each object so the map doesn't grow indefinitely. The biggest issue to me is that this involves having some hidden static map in my ModelObject class that just feels... wrong somehow. The other option is to have some method that can interpret the labels. All of these labels are derived from the underlying objects. So I can look at the signal label, for instance, and say "these 20 characters are the port" to figure out what port I need. This would allow me to quickly figure out what I need. However, if the label format is changed, the translateLabelToObject method needs to be updated as well or everything breaks. Which solution is cleaner, or is there a cleaner solution than either of the above? For the record, I'm working with a sufficient number of variables to make direct comparison a little slow, but not enough to be concerned about memory overhead; this is written in Java. All objects whose labels I need to look up extend the same parent class.
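
    For the first option, the 'store' really is just a map from label to object. Below is a minimal Java sketch; the LabelStore name and its method names are invented for illustration (the question only mentions addObject and a hidden map on ModelObject), and it is written generically rather than against the actual parent class:

      import java.util.HashMap;
      import java.util.Map;

      /** Maps unique labels to in-memory objects: O(1) lookups instead of scanning every object. */
      public class LabelStore<T> {
          private final Map<String, T> byLabel = new HashMap<>();

          /** Register an object under its label (e.g. called from addObject). */
          public void add(String label, T object) {
              T previous = byLabel.put(label, object);
              if (previous != null && previous != object) {
                  throw new IllegalStateException("duplicate label: " + label);
              }
          }

          /** Must be called when the object is deleted, or the map grows indefinitely. */
          public void remove(String label) {
              byLabel.remove(label);
          }

          /** Returns the object for this label, or null if it is unknown. */
          public T lookup(String label) {
              return byLabel.get(label);
          }
      }

    Whether that store lives behind addObject as a static field or gets passed around explicitly is the real design question; the map itself is the cheap part, and it avoids coupling lookups to the label format the way the translateLabelToObject approach does.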

    Read the article

  • What is there in Win 7 Pro (or Ultimate) that is not there in Home Premium? - Especially considering this situation..

    - by Senthil
    I want to know the REAL difference between Windows 7 Home Premium and Professional/Ultimate. In India, the cost of the different versions is:

    Ultimate - 11,200 INR
    Professional - 10,700 INR
    Home Premium - 6,600 INR

    The absolute cost of the first two is so high to me that the difference (500 INR) doesn't matter. So to me there is really no choice between the first two - if I decide to buy the Professional version, I'd rather go for Ultimate itself. What I want to know is whether Home Premium is enough for my needs. I tried searching for a comparison, but much of it looks like marketing junk from MS: short and vague. According to this page, the major differences between Pro and Home Premium are:

    - Run many Windows XP productivity programs in Windows XP Mode.
    - Connect to company networks easily and more securely with Domain Join.

    You can do both in Pro but not in Home Premium. I intend to use my Windows 7 for a small business - just starting up. So I'll be dealing with the following:

    - All kinds of development tools and servers
    - Very important - I will run virtual machine software (MS VPC or VMware or Sun VirtualBox etc.)
    - My system will be acting as the server for most purposes till I can afford dedicated servers
    - Connecting the system to a variety of network devices (PCs, printers, etc.)
    - Running productivity, business and financial apps
    - Any other small software-startup business requirement that I haven't thought of yet

    Professional (and Ultimate) is twice as expensive as Home Premium. So it'd be great if someone could point out the things you cannot do with Home Premium when you use it like I explained above, so that I can make a decision about which one to buy. I need some real-life experiences so that I can make an informed decision - not a decision based on marketing junk.

    Read the article

  • Windows Clients: Windows or Linux Domain Controller?

    - by Ramon Marco Navarro
    I'm planning to set up a domain controller for our small computer laboratory. I'm a little confused as to what operating system to use for our domain controller. What's in the lab: The lab has 25 units running a mix of Windows 7 and Windows XP. The domain controller will only have 2GB of RAM running a C2D E7200. (Is this enough?) What we want: The Domain Controller will also be running a git server. The Domain Controller will also be used as a general development machine (mostly Java, PHP). A way to centralize the updates for the windows clients, so that they won't have to download the same patches from the remote site. The machines would just query them from the local domain controller and get the updates from there. Our head recommended that I virtualize a Windows Server 2008 system under a Linux host and use the former as a domain controller and the latter for development or the other way around. A comparison of the advantages and disadvantages of using a Linux distribution or Windows Server 2008 in this situation would also be appreciated. As you may have noticed by now, I'm kinda new to setting up a domain so I hope you guys will be able to help me. Thank you.

    Read the article

  • Chunking large rsync transfers?

    - by Gabe Martin-Dempesy
    We use rsync to update a mirror of our primary file server to an off-site colocated backup server. One of the issues we currently have is that our file server has 1TB of mostly smaller files (in the 10-100kb range), and when we're transferring this much data, we often end up with the connection being dropped several hours into the transfer. Rsync doesn't have a resume/retry feature that simply reconnects to the server to pick up where it left off -- you need to go through the file comparison process, which ends up being very lengthy with the number of files we have. The solution that's recommended to get around this is to split your large rsync transfer into a series of smaller transfers. I've figured the best way to do this is by the first letter of the top-level directory names, which doesn't give us a perfectly even distribution, but is good enough. I'd like to confirm whether my methodology for doing this is sane, or if there's a simpler way to accomplish the goal. To do this, I iterate through A-Z, a-z, 0-9 to pick a one-character $prefix. Initially I was thinking of just running

      rsync -av --delete --delete-excluded --exclude "*.mp3" "src/$prefix*" dest/

    (--exclude "*.mp3" is just an example, as we have a more lengthy exclude list for removing things like temporary files.) The problem with this is that any top-level directories in dest/ that are no longer present on src will not get picked up by --delete. To get around this, I'm instead trying the following:

      rsync \
        --filter 'S /$prefix*' \
        --filter 'R /$prefix*' \
        --filter 'H /*' \
        --filter 'P /*' \
        -av --delete --delete-excluded --exclude "*.mp3" src/ dest/

    I'm using show and hide over include and exclude, because otherwise --delete-excluded will delete anything that doesn't match $prefix. Is this the most effective way of splitting the rsync into smaller chunks? Is there a more effective tool, or a flag that I've missed, that might make this simpler?

    Read the article

  • Outlook 2007 OST File Indexing and OneNote 2007 Indexing are Broken

    - by Matt
    I'm running Outlook 2007 under Windows 7 Home Premium RTM. My OST file was previously being properly indexed, but eventually searches slowed down significantly, so I suspected a problem. Searching and indexing appear broken in OneNote 2007 as well; search time there is now significantly longer too. I brought up the Outlook 2007 Search Options dialog and noticed that my mailbox (running from an Exchange 2007 server) wasn't listed in the "Index messages in these data files:" list box. Next I ran the Windows "Find and fix problems with Windows Search" wizard, which reported no errors. Then I brought up the Windows Indexing Options dialog, which shows Outlook listed, then clicked Advanced and rebuilt the index. No dice - the list box in the Outlook 2007 dialog still didn't show my mailbox. When I click the Modify button in the Indexing Options dialog, I see an "oneindex://..." entry; hovering over it, the alt text indicates "This location is currently unavailable". When I delete it and rebuild the index, this entry returns. UPDATE: Comparing that dialog with a working PC shows that on the broken PC, the lower half of the dialog lists Outlook, but neither Outlook nor OneNote is showing in the upper half. The working PC has Outlook and OneNote in both parts of the dialog.

    Read the article

  • Hardware Requirements & Tuning - Flash Media Server 3.5 Interactive

    - by Anthony Kanago
    I am trying to spec out a server to purchase (physically, not rented from someone like softlayer.com) to run an intranet instance of Flash Media Server 3.5 Interactive. In general, the server will likely be fielding somewhere on the order of 400 connections at a time at the upper limit. Of course, should this increase, we don't want to be stuck. While the decision is not final, we will likely be running the server on Red Hat rather than Windows. The server will be on gigabit Ethernet. I have two related questions:

    - What sort of hardware would I realistically need to support this?
    - What advice can you offer for tuning FMS and the OS to perform at this level?

    We are looking for the bare minimum that will run this effectively, to save on costs. Realistically, the average number of connections will be fairly low (50-150) by comparison with that upper-limit estimate. To reiterate: we just want to be cautious about not getting caught out when we need more power, but we also need a low-cost solution (doesn't everyone?) and that may take priority. Windows and Red Hat are the two officially supported operating systems. Since FMS is stated to be 32-bit only, I'm sticking with a 32-bit OS. The hardware requirements listed by Adobe on their website are:

    - 3.2GHz Intel® Pentium® 4 processor (dual Intel Xeon® or faster recommended)
    - 2GB of RAM (4GB recommended)
    - 1Gb Ethernet card

    So what do I realistically need for those sorts of connection numbers, and what can I do to tune things to get more out of less hardware? Thanks!

    Read the article

  • Solaris TCP stack tuning

    - by disserman
    We have a large web project (about 2-3k requests per second), using haproxy (http://haproxy.1wt.eu/) as a frontend and load balancer for the Java application servers. The frontend (haproxy) is running on Linux, but we are going to migrate it to Solaris 10, as all our other servers run under Solaris. After switching the traffic over I see two things: a) the web site loads slower (5-10 seconds with images, in comparison to 2-3 seconds on Linux), and b) haproxy sometimes fails to perform a "lifecheck" (fetch a special web page and analyze the HTTP response code) due to a socket timeout. After switching traffic back to Linux everything is okay. I've tried to tune all the params I found in /dev/tcp, but no progress. I believe the problem is some open-socket limitation. If someone can point me to the answer, it would be greatly appreciated. p.s. haproxy is running under a Xen DomU on Linux (kernel 2.6.18, Debian 5), and under a zone on Solaris (10 u8). The only thing we did on Linux was increase ip_conntrack_max (I believe the Solaris option tcp_conn_req_max_q is the equivalent).

    Read the article

  • Cannot access certain URL on my wireless

    - by dehmann
    Problem: On my wireless network at home, there is one URL that I just cannot access with my browser: http://research.microsoft.com/ I have no problems with the Internet connection otherwise. But on that address I just get

      The connection was reset
      The connection to the server was reset while the page was loading.

    from Firefox. I am using a DSL modem (Westell) and Linksys wireless router (using DHCP). When I use my neighbor's wireless connection I can access the microsoft site without a problem.

    Additional technical details: But with my connection, here is what I get from nslookup. It is weird: it first cannot find the address, but after I look up another address it can find it:

      $ nslookup research.microsoft.com
      ;; connection timed out; no servers could be reached

      $ nslookup google.com
      Non-authoritative answer:
      Name: google.com
      Address: 72.14.204.104
      Name: google.com
      Address: 72.14.204.147
      Name: google.com
      Address: 72.14.204.99
      Name: google.com
      Address: 72.14.204.103

      $ nslookup research.microsoft.com
      Non-authoritative answer:
      Name: research.microsoft.com
      Address: 131.107.65.14

    But even after nslookup finds it, Firefox still cannot access it. Here is what traceroute says:

      $ traceroute http://research.microsoft.com/
      traceroute: Warning: http://research.microsoft.com/ has multiple addresses; using 8.15.7.117
      traceroute to http://research.microsoft.com/ (8.15.7.117), 64 hops max, 40 byte packets
       1  dslrouter.westell.com (1XX.XXX.X.X)  4.515 ms  2.760 ms  3.072 ms
       2  * * *

    Traceroute just to the IP:

      $ traceroute 131.107.65.14
      traceroute to 131.107.65.14 (131.107.65.14), 64 hops max, 40 byte packets
       1  dslrouter.westell.com (1XX.XXX.X.X)  11.912 ms  2.684 ms  2.808 ms
       2  * * *

    Comparison: traceroute to a google.com IP:

      $ traceroute 72.14.204.99
      traceroute to 72.14.204.99 (72.14.204.99), 64 hops max, 40 byte packets
       1  dslrouter.westell.com (1XX.XXX.X.X)  6.428 ms  6.981 ms  117.099 ms
       2  * * *

    Any comments / help?

    Read the article

  • Dell PE2950 - slow IO rates for writing and reading locally

    - by OrenM
    I'm having a serious issue with a Dell PE2950 server. The server has really slow IO rates, so slow that I'm not able to use it anymore. I tried a few things to solve this:

    - changing the disks to new disks (configured them as RAID 1)
    - changing the PERC card + PERC cables
    - reinstalling the OS (of course, I had to because of the disk change) - CentOS 5.5 x64
    - firmware updates to everything
    - virtual disk policy: No Read Ahead, Write Back, disk cache policy disabled

    OpenManage doesn't alert about anything. I also ran Dell's diag tests and everything passed, and Dell didn't see anything in the DSET log either. Dell offered to reseat everything, including the CPU; we did that as well, and the IO rates are still slow. I have several PE2950 servers, and I have never had such a thing with any of those. All have similar or identical hardware to this one, all configured the same, with the same OS (CentOS 5.5 x64), same disks, same RAID, same policy. Just for comparison:

    The problematic PE2950 server:

      [root@bad ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
      200000+0 records in
      200000+0 records out
      1638400000 bytes (1.6 GB) copied, 27.7946 seconds, 58.9 MB/s

      real    0m33.968s
      user    0m0.531s
      sys     0m26.000s

    A good PE2950 server (with the exact same hardware):

      [root@good ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
      200000+0 records in
      200000+0 records out
      1638400000 bytes (1.6 GB) copied, 3.19999 seconds, 512 MB/s

      real    0m7.694s
      user    0m0.053s
      sys     0m4.057s

    Hopefully you will have an idea of what could be causing the problem.

    Read the article

  • LTO 2 tape performance in LTO 3 drive

    - by hmallett
    I have a pile of LTO 2 tapes, and both an LTO 2 drive (HP Ultrium 460e) and an autoloader with an LTO 3 drive in it (Tandberg T24 autoloader, with an HP drive). Performance of the LTO 2 tapes in the LTO 2 drive is adequate and consistent. HP L&TT tells me that the tapes can be read and written at 64 MB/s, which seems in line with the performance specifications of the drive. When I perform a backup (over the network) using Symantec Backup Exec, I get about 1700 MB/min backup and verify speeds, which is slower, but still adequate. Performance of the LTO 2 tapes in the LTO 3 drive in the autoloader is a different story. HP L&TT tells me that the tapes can be read at 82 MB/s and written at 49 MB/s; the write-speed drop seems unusual, but it's not the end of the world. When I perform a backup (over the network) using Symantec Backup Exec, though, I get about 331 MB/min backup and 205 MB/min verify speeds, which is not only much slower, but also much slower for reads than for writes. Notes: The comparison testing was done on the same server, SCSI card and SCSI cable, with the same backup data set and the same tape each time. The tapes and drives are error-free (according to HP L&TT and Backup Exec). The SCSI card is a U160 card, which is not normally recommended for LTO 3, but we're not writing to LTO 3 tapes at LTO 3 speeds, and a U320 SCSI card is not available to me at the moment. As I scratch my head over the reason for the performance drop, my first question is: while LTO drives can write to previous-generation LTO tapes, does doing so normally incur a performance penalty?

    Read the article

  • Xorg eating up too much RAM on Ubuntu 9.10 box

    - by Yang
    Xorg is eating up 444MB of 2GB total RAM on my Ubuntu 9.10 x86_64 machine with nvidia drivers installed for the nvidia G86 (GeForce 8300 GS). top shows:

      top - 18:21:41 up 6 days, 2:40, 9 users, load average: 0.46, 1.12, 1.22
      Tasks: 266 total, 3 running, 262 sleeping, 1 stopped, 0 zombie
      Cpu(s): 8.4%us, 2.0%sy, 0.0%ni, 89.1%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 2055736k total, 1965136k used, 90600k free, 3952k buffers
      Swap: 979924k total, 979908k used, 16k free, 102636k cached

        PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
       1432 root  20  0 1154m 442m 7492 S    8 22.0 32:56.97 Xorg
      18462 yang  20  0 1001m 219m 8356 S    0 10.9  5:13.25 chrome
      24099 yang  20  0  865m  83m  13m S    0  4.2  0:06.91 chrome

    xrestop shows:

      xrestop - Display: :0.0
                Monitoring 47 clients. XErrors: 0
                Pixmaps: 40430K total, Other: 142K total, All: 40573K total

      res-base Wins GCs Fnts Pxms Misc  Pxm mem  Other   Total   PID Identifier
      1c00000    21  46    1   19  697    9128K    18K   9146K  3169 x-nautilus-desktop
      1000000     4   3    0   17  194    9000K     4K   9004K  3134 gnome-settings-daemon
      1600000    51   2    1   25 1100    7648K    28K   7676K     ? compiz

    For comparison, here's my other Ubuntu box, which also has compiz etc. enabled but with ATI RV370 (Radeon X300SE):

      top - 18:18:18 up 58 days, 4:27, 9 users, load average: 0.00, 0.00, 0.00
      Tasks: 224 total, 1 running, 223 sleeping, 0 stopped, 0 zombie
      Cpu(s): 0.3%us, 0.3%sy, 0.0%ni, 98.8%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 1024964k total, 987124k used, 37840k free, 247012k buffers
      Swap: 2048276k total, 94296k used, 1953980k free, 264744k cached

        PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
      24324 yang  20  0 61936  35m 6364 S    0  3.5   4:35.84 nxagent
       1768 ntop  20  0  190m  32m 5388 S    1  3.2 283:36.15 ntop
       1178 root  20  0 60588  29m 1788 S    0  3.0   5:48.89 console-kit-dae
      ...
       1315 root  20  0  343m 4956 4020 S    0  0.5   3:43.87 Xorg

    Any ideas on how to get to the bottom of this? (i.e. not "Log out"/"Reboot") Thanks in advance.

    Read the article

  • Basic multicast network performance problems

    - by davedavedave
    I've been using mpong from 29West's mtools package to get a basic idea of multicast latency across various Cisco switches: a 1Gb 2960G, a 10Gb 4900M and a 10Gb Nexus N5548P. The 1Gb switch is just for comparison. I have the following results for ~400 runs of mpong on each switch (sending 65536 "ping"-like messages to a receiver which then sends them back -- all over multicast). Numbers are latencies measured in microseconds.

      Switch          Average     StdDev    Min        Max
      2960 (1Gb)      109.68463   0.092816  109.4328   109.9464
      4900M (10Gb)    705.52359   1.607976  703.7693   722.1514
      NX 5548 (10Gb)   58.563774  0.328242   57.77603   59.32207

    The result for the 4900M is very surprising. I've tried unicast ping and I see the 4900 has ~10us higher latency than the N5548P (average 73us vs 64us). Iperf (with no attempt to tune it) shows both 10Gb switches give me 9.4Gbps line speed. The two machines are connected to the same switch and we're not doing any multicast routing. The OS is RHEL 6. The 10Gb NICs are HP 10GbE PCI-E G2 dual-port NICs (I believe they are rebranded Mellanox cards). The 4900 switch is used in a project with tight access control, so I'm waiting for approval before I can access it and check the config. The other two I have full access to configure. I've looked at the Cisco document [1] detailing the differences between NX-OS and IOS w.r.t. multicast, so I've got some ideas to try out, but this isn't an area where I have much expertise. Does anyone have any idea what I should be looking at once I get access to the switch?

    [1] http://docwiki.cisco.com/wiki/Cisco_NX-OS/IOS_Multicast_Comparison

    Read the article

  • Which internet scenario would be better?

    - by JL
    I currently have an 8mbps (down) / 512kbps (up) telephone ADSL solution. I must say the reliability is excellent, and up until now it's been the fastest connection I could get, because I don't live in a cable zone. The real speed of my connection is around 7mbps, but sometimes I manage to get the full 8mbps. I use my connection for work, so it needs to be at least 99% reliable. Recently I was told by a guy who lives up the road that he has a wireless connection with an external antenna, his speeds are 20mbps / 512kbps, and he's paying about half of what I pay for my wired telephone connection. My question is: is wireless internet good enough for a power user who uses his connection for work 8 hours a day, including VPNing into servers remotely? Besides this I also enjoy playing the odd network game - not a WoW freak, but sometimes I do pick up the odd MMORPG and at times indulge in some semi-heavy gaming sprees. Will the wireless latency drive me crazy and seem slow in comparison? Will it be reliable enough? I also live in an area that snows heavily in winter. I guess it's a question of: should I go wireless or not? I've only had one wireless connection before, and that was years ago using iBurst technology. I remember it was terrible for VPN, but I guess the technology might have improved since then? What do you guys think?

    Read the article

  • Bad results converting PDF to EPS on Linux

    - by Tim
    I'm having some trouble converting PDFs (created by Adobe Illustrator on a Mac) to EPS. I have tried several things, but I am wondering if there is a better option. The following list is ordered by decreasing quality:

    - inkscape --export-area-page --export-eps=out.eps in.pdf using the graphical program Inkscape works best, but is a bit slow;
    - pdftops -eps in.pdf out.eps uses Poppler and works well and is fast;
    - pdf2ps in.pdf out.eps uses Ghostscript and works OK for simple documents;
    - convert in.pdf out.eps uses ImageMagick and always rasterizes the image.

    I haven't tested the following:

    - acroread -toPostScript uses acroread (Linux only)

    Some issues I've found:

    - Transparency is not supported in EPS, but instead of flattening the layers, most programs rasterize the image, producing big files and ugly graphs. Inkscape does this best by only rasterizing the unsupported area.
    - Gradients are rendered properly by Inkscape, but Poppler somehow chops up the gradient into many shapes of different colors.
    - Greek symbols are seemingly not supported by Ghostscript and are rasterized (using pdf2ps).

    What are your experiences with this kind of task? Did I forget certain programs and/or command-line options that improve quality? I found some posts on this, but not a (thorough) comparison of the possibilities; please correct me if I'm wrong. Related post: How to convert PDF to EPS? on TeX.

    Read the article

  • Windows Bluescreen - atikmpag.sys

    - by Mochan
    Information

    Name: atikmpag.sys bluescreen (BSOD or Blue Screen of Death)
    Error code: 0x00000116
    Appears when: playing games, watching videos
    Can be reproduced: yes
    Cause: the graphics card is the main assumption

    System Specifications

    Before we begin, I will inform you of my specifications.

    OS: Windows 7 x64 Home Edition
    Model: Dell Inspiron 15R Special Edition (aka Inspiron 7520) (add 2GB of RAM to the model linked)
    Hard Drive: 1TB
    CPU: Intel quad-core i7 Sandy Bridge (I think) processor at 2.10GHz (I think it can be clocked to 3GHz?)
    RAM: 6GB (I think 1 x 4GB and 1 x 2GB)
    Display: 15.6" HD (1366x768)
    Graphics: AMD Radeon HD 7500M 2GB

    Details

    So now that you know some basics about my computer, I'll get to the problem. Being an Ubuntu user I hardly use Windows, but occasionally I do, to run Skyrim and other games incompatible with Linux and WINE. The new Sims 3 Seasons patch is also now not supported. The bluescreen appears when playing these two games (and other ones, theoretically) and while watching videos; watching it as it happens, I see it is the 'atikmpag.sys' error. I have not installed much, and nothing significant: I think I have downloaded Skyrim, Firefox and The Sims 3. I haven't done much more... since Ubuntu is definitely the best in comparison! (No hate, just a joke :P). I can reproduce it easily (just by running a game for less than a minute). It is always there each time, but it's never at a specific point or anything. So far I have found that it may be caused by a lack of power to the graphics card, or the card may be damaged or fried, even though I've had the computer for a mere 4 months (and have had other problems with it also). I have contacted Dell, but they are useless beyond belief. Anyone with any information, solutions or details is encouraged to share their knowledge, as it would be immensely appreciated.

    Read the article
