Search Results

Search found 17621 results on 705 pages for 'just my correct opinion'.


  • How to get partition information from non-booting server?

    - by gravyface
    I need to manually rebuild a mirrored array on a server, and am in the process of reinstalling SBS 2003 on it. However, it's a Dell server, and I know that there's the Dell FAT32 diagnostics partition, a system partition, and a data partition, but I don't know the size of each. I'm planning on reinstalling SBS 2003 and all applications on the server, then doing a System State restore, but I figured that not having the correct partitions will cause some grief: am I right? I'm almost thinking that the size of the partitions shouldn't matter, but I'm not positive. Question: should I care about the size of the partitions? If so, how can I get this partition information from a non-booting drive? We have an Acronis image of the one working disk and the partitions are mounted/viewable in Explorer on a workstation, but I'm not sure where the Logical Disk Manager/Disk Management data is stored and/or if there's a way to retrieve it without having a working Windows installation.
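
    For reference, a minimal way to list the partition layout of the non-booting disk, assuming it shows up as /dev/sda when the server is booted from any Linux live CD (the device name is illustrative):

        # partition table with the start, end, and size of each partition
        sudo fdisk -l /dev/sda
        # the same layout with sizes printed in megabytes
        sudo parted /dev/sda unit MB print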

    Read the article

  • Choosing Technology To Include In Software Design

    How many of us have been forced to select one technology over another when designing a new system? What factors do we and should we consider? How can we ensure the correct business decision is made? When faced with this type of decision, it is important to gather as much information as possible regarding each technology being considered, as well as the project itself. Additionally, I tend to delay my decision about the technology until it absolutely must be made, because as the project progresses, requirements and other factors can change which technology is best for the project. Important factors to consider when making technology decisions:

      - Time to implement and maintain
      - Total cost of technology (including implementation and maintenance)
      - Adaptability of technology
      - Implementation team's skill sets
      - Complexity of technology (including implementation and maintenance)
      - Forecasted Return On Investment (ROI)
      - Forecasted Profit On Investment (POI)

    Of these factors, the ROI and POI weigh the heaviest, because they take into consideration the other factors when calculating the profitability and return on investments. For a real-world example, let us consider developing a web-based lead management system for a new company. This system can be hosted either on a Microsoft Windows-based web server or on a Linux-based web server. Important factors for this example:

    Implementation team's skill sets
      - Member 1 - Skill set: Classic ASP, ASP.Net, and MS SQL Server; Experience: 10 years
      - Member 2 - Skill set: PHP, MySQL, Photoshop and MS SQL Server; Experience: 3 years
      - Member 3 - Skill set: C++, VB6, ASP.Net, and MS SQL Server; Experience: 12 years

    Total cost of technology (including implementation and maintenance)
      - Linux - Initial year: $5,000 (random value); Additional years: $3,000 (random value)
      - Windows - Initial year: $10,000 (random value); Additional years: $3,000 (random value)

    Complexity of technology
      - Linux - Large learning curve with user-driven documentation; estimated learning cost: $30,000
      - Windows - Minimal, based on the team's skills, with Microsoft documentation; estimated learning cost: $5,000

    ROI
      - Linux - Initial total cost: $35,000; Additional cost: $3,000 per year
      - Windows - Initial total cost: $15,000; Additional cost: $3,000 per year

    Based on these hypothetical numbers, it makes more sense to select the Windows-based web server, because the initial investment in the technology is much lower than for the Linux-based web server.

    Read the article

  • SQL2008 Won't Work After SP1 Install?

    - by leen3o
    I have SQL2008 installed on my Win2008 server, and it's been working fine - I have sites running using SQL databases etc. I thought I would install SQL2008 SP1, but after the install I cannot connect to SQL via Management Studio, and in Configuration Manager I cannot start SQL:

        TITLE: Connect to Server
        Cannot connect to #MYINSTANCENAME#.
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a connection to SQL Server.
        The server was not found or was not accessible. Verify that the instance name is correct and that
        SQL Server is configured to allow remote connections.
        (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)
        (Microsoft SQL Server, Error: -1)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476
        BUTTONS: OK

    I'm not really a techie, so I'm a bit stuck - any ideas?
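
    For reference, two quick checks from an elevated command prompt; the instance name is a placeholder, and error 26 specifically concerns locating named instances:

        :: is the database engine service running? (MSSQLSERVER for a default instance)
        sc query "MSSQL$MYINSTANCENAME"
        :: named instances are located through the SQL Server Browser service
        net start "SQL Server Browser"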

    Read the article

  • Self Service Reporting With PowerPivot

    - by blakmk
    There are so many cool new features in SQL 2008 R2 that it was difficult for me to pick a topic for T-SQL Tuesday. But the one that I am now a secret fan of, I once resented for its creation. Let me explain: for years I have encountered reporting systems cobbled together in tools like Access and Excel, built by "database hobbyists" who had no formal training in database design or best practices. They would take their monstrosities as far as they could go before ultimately it stopped working or the person that wrote it left the company. At that point it would become the resident DBA's problem to support it as a live application. So when I first heard of PowerPivot, a sense of deja vu overtook me and I felt like the guy in the Austin Powers movie, knowing the inevitable is coming but somehow unsure how to get out of the way. But when I eventually saw it in action, I quickly realised that it is a very powerful tool. It has a much smaller "time to market" than traditional BI architectures. Combined with the new features of Excel, some pretty impressive dashboards can be produced. Of course PowerPivot is not a magic bullet, and along with potential scalability issues there are the usual issues, such as master data management and data quality, that cannot be overcome easily with PowerPivot. As a tool though, it has potential. Traditional BI is expensive, both in terms of time and the amount of resources it takes to deliver the system. The time lag between an analyst or a commercial accountant requesting reports and the report being delivered can make a huge commercial difference. I have observed companies where empowered end users become extremely productive when allowed to plough into various disparate datasets. It may not be the correct way or the most sustainable, but it's cheap and quick. In these times when budgets are being slashed and we are forced to deliver more with less, why not empower the end user with a tool that is designed for exactly this task... @blakmk

    Read the article

  • iptables unresolved dependencies

    - by tertle
    I'm trying to set up OpenVPN Access Server on a VPS running Ubuntu 9.10 for a friend so she can play games from her uni campus. The problem is I keep running into this error when trying to start OpenVPN:

        Service deferred error: IPTablesServiceBase: failed to run iptables-restore [status=1]:
        ['FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
        'FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
        'iptables-restore: line 46 failed']:
        internet/base:1175,internet/base:752,internet/process:45,internet/process:306,internet/_baseprocess:48,internet/process:775,internet/_baseprocess:60,svc/pp:116,svc/svcnotify:26,internet/defer:238,internet/defer:307,internet/defer:323,sagent/ipts:105,sagent/ipts:39,util/error:52,util/error:32
        service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
        service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
        service failed to start due to unresolved dependencies: set(['iptables_openvpn'])

    Now I've already got my provider to enable the TUN/TAP device driver, and I checked this using:

        # cat /dev/net/tun

    which returned "File descriptor in bad state", which I believe means it's enabled. After extensive searching, I've been unable to find any solution other than people suggesting to make sure the TUN/TAP device driver is enabled. Any ideas on how to solve my issue? I'm not very experienced with Linux and I feel in over my head here, so any advice is greatly appreciated. --edit-- Just stumbled across this. Not sure how I missed it earlier. I believe I need to get modprobe ipt_mark & modprobe ipt_MARK run on the host node by my provider. Is this correct, and something I should try to get done?
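
    As a sketch, those commands as they would be run on the host node; on an OpenVZ/Virtuozzo VPS only the provider can do this, since the kernel modules live on the hardware node:

        sudo modprobe ipt_mark
        sudo modprobe ipt_MARK
        # confirm the modules actually loaded
        lsmod | grep -i ipt_mark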

    Read the article

  • JustCode Provides Reflector Alternative

    - by Joe Mayo
    If you've been a loyal Reflector user, you've probably been exposed to the debacle surrounding RedGate's decision to no longer offer a free version.  Since then, the race has begun for a replacement from a provider that would stand by their promises to the community.  Mono has an ongoing free alternative, which has been available for a long time.  However, other vendors are stepping up to the plate with their own offerings.

    If Not Reflector, Then What?

    One of these vendors is Telerik.  In their recent Q1 2011 release of JustCode, Telerik offers a decompilation utility rivaling what we've become accustomed to in Reflector.  Not only does Telerik offer a usable replacement, but they've (in my opinion) produced a product that integrates more naturally with Visual Studio than any other product ever has.  Telerik's decompilation process is so easy that the accompanying demo in this post is blindingly short (except for the presence of verbose narrative). If you want to follow along with this demo, you'll need to have Telerik JustCode installed.  If you don't have JustCode yet, you can buy it or download a trial at the Telerik Web site.

    A Tall Tale; Prove It!

    With JustCode, you can view code in the .NET Framework or any other 3rd-party library (that isn't well obfuscated).  This demo depends on LINQ to Twitter, which you can download from CodePlex.com and create a reference to, or install the package online as described in my previous post on NuGet.  Regardless of the method, you'll have a project with a reference to LINQ to Twitter.  Use a Console project if you want to follow along with this demo. Note: if you've created a Console project, remember to ensure that the Target Framework is set to .NET Framework 4.  The default is .NET Framework 4 Client Profile, which doesn't work with LINQ to Twitter.  You can check by double-clicking the Properties folder on the project and inspecting the Target Framework setting. Next, you'll need to add some code to your program that you want to inspect. Here, I add code to instantiate a TwitterContext, which is like a LINQ to SQL DataContext, but works with Twitter:

        var l2tCtx = new TwitterContext();

    If you're following along, add the code above to the Main method, which will look similar to this:

        using LinqToTwitter;

        namespace NuGetInstall
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var l2tCtx = new TwitterContext();
                }
            }
        }

    The code above doesn't really do anything, but it gives me something to show while demonstrating how JustCode decompilation works. Once the code is in place, click on TwitterContext and press the F12 (Go to Definition) key.  As expected, Visual Studio opens a metadata file with prototypes for the TwitterContext class.  Here's the result: Opening a metadata file is the normal way that Visual Studio works when navigating to the definition of a type where you don't have the code.  The scenario with TwitterContext happens because you don't have the source code to the file.  Visual Studio has always done this, and you can experiment by selecting any .NET type, e.g. a string, and observing that Visual Studio opens a metadata file for the .NET String type. The point I'm making here is that JustCode works the way Visual Studio works, and you'll see how this can make your job easier. In the previous figure, you only saw prototypes associated with the code; notice, for instance, that the default constructor is empty.  Again, this is normal because Visual Studio doesn't have the ability to decompile code.
    However, that's the purpose of this post: showing you how JustCode fills that gap. To decompile code, right-click on TwitterContext in the metadata file and select JustCode Navigate -> Decompile from the context menu.  The shortcut keys are Ctrl+1.  After a brief pause, accompanied by a progress window, you'll see the metadata expand into fully decompiled code. Notice below how the default constructor now has code, as opposed to the empty member prototype in the original metadata.

    And Why is This So Different?

    Again, the big deal is that Telerik JustCode decompilation works in harmony with the way that Visual Studio works.  The navigate-to functionality already exists, and you can use that, along with a simple context menu option (or shortcut key), to transform prototypes into decompiled code. Telerik is filling the Reflector/Red Gate gap by providing a supported alternative for decompiling code.  Many people, including myself, used Reflector to decompile code when we were stuck with buggy libraries or insufficient documentation.  Now we have an alternative that's officially supported by a company with an excellent track record for customer (developer) service, Telerik.  Not only that, JustCode has several other IDE productivity tools that make the deal even sweeter. Joe

    Read the article

  • How soon does nginx's token bucket replenish when limiting at requests per minute?

    - by Michael Gorsuch
    Hi all. We've decided that we want to experiment and limit requests per minute instead of requests per second on our sites. However, I am confused by the burst parameter in this context. I am under the impression that when you use the 'nodelay' flag, the rate limiting facility acts like a token bucket instead of a leaky bucket. That being the case, the bucket size is equal to the burst parameter, and every time that you violate the policy (say 1 req/s), you have to put a token in the bucket. Once the bucket is full (being equal to the burst setting), you are given a 503 error page. I am also under the impression that once a violator stops going against the policy, a token is removed from the bucket at a rate of 1 token/s, allowing him to regain access to the site. Assuming that I have the above correct, my question is what happens when I start regulating access per minute? If we choose 60 requests per minute, at what rate does the token bucket replenish?
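
    For reference, nginx tracks rate limits at millisecond granularity internally, so a rate of 60r/m replenishes roughly one token per second rather than 60 tokens at the top of each minute. A minimal sketch of a per-minute, per-IP policy (zone name, zone size, and burst value are illustrative):

        # in the http {} block: a shared zone keyed by client IP
        limit_req_zone $binary_remote_addr zone=perip:10m rate=60r/m;

        # in a server {} or location {} block:
        limit_req zone=perip burst=20 nodelay;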

    Read the article

  • please help me understand libraries/includes

    - by fiftyeight
    I'm trying to understand how libraries work. For example, I downloaded a tarball and extracted it. Now I do "./configure", and it searches in pre-defined directories, from what I understand, for certain library files. What does it do then? It creates a makefile, and the makefile contains the paths to these libraries? Then I do "make", and it compiles the source code and hard-codes the locations of the libraries? Am I correct? I do not really understand whether libraries are files with pre-defined paths or the OS somehow gives access to the libraries through system calls. Another example: I compiled something on my computer, then moved it to a remote server. The executable needs MySQL libraries to work; the server has MySQL, but for some reason when I execute the file it tells me "can't find libmysqlclient.so.16". Is there a solution for this? Is there a way to know where it tries to locate this file, or to give it another path? I can't compile it on the server since it has no compiler, and I don't have root access to install packages. Last question: in the sequence "./configure", "make", "make install", is the "make install" command the only one that actually puts files outside the directory in which these files reside? If, for example, the software will be installed in /usr/local/, is the "make install" command the only one that will require "sudo" before it? Let me see if I got it correctly: "./configure" creates the Makefile according to the location of various files on your system, "make" compiles the source code according to this makefile, and "make install" moves the files to their appropriate location. I know this has been very long; I thank anyone who had the patience to read my question :)
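
    As a short sketch of how to diagnose and work around the missing library at runtime (the binary name and library directory are illustrative):

        # list the shared libraries the executable wants, and which ones are missing
        ldd ./myprogram
        # tell the runtime linker to also search a directory you control
        export LD_LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH
        ./myprogram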

    Read the article

  • iMac invalid memory access and then crash?

    - by Sam McAfee
    My iMac G5 from a few years ago (the square white one) now dies with a message about an invalid memory access. It goes straight to an Open Firmware prompt and doesn't even load the Mac OS at all. Am I correct in guessing that the memory is probably the issue, and that if I replace it, we may be able to boot again as normal? That's my gut feeling anyway, so I ordered some more memory and plan to swap it today. Any thoughts? Ideas?

    Read the article

  • Understanding DHCP setting for DNS Server Options and Scope Options

    - by Saariko
    I have installed two DCs on my network (W2K8 R2); both serve as replica DCs for my domain. On one of them (DC1) there is also a DHCP server running, and on both I have a DNS server running. I am trying to understand the difference between the Server Options and Scope Options settings within DHCP. As I understand it: in the server options, I should put an external DNS server (let's say 8.8.8.8 - Google), and in the scope options, I should put both my internal DC1 and DC2 IPs as the DNS servers, which are then distributed to my domain clients. Is that correct? Is there a better way? Do I need to add a loopback address as well?
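
    As a sketch of how the scope-level DNS option could be set from the command line on the DHCP server; the scope and addresses are illustrative, and the exact netsh syntax here is an assumption worth verifying before use:

        :: option 006 (DNS servers) for one scope - point clients at DC1 and DC2
        netsh dhcp server scope 192.168.1.0 set optionvalue 006 IPADDRESS 192.168.1.10 192.168.1.11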

    Read the article

  • Does it make sense to write build scripts in C++?

    - by Klaim
    I'm using CMake to generate my projects' IDE files/makefiles, but I still need to call custom "scripts" to manipulate my compiled files or even generate code. In previous projects I've been using Python and it was OK, but now I'm having serious trouble managing a lot of dependencies in two very big projects I'm working on, so I want to minimize the dependencies everywhere. Someone suggested to me to use C++ to write my build scripts instead of adding a language dependency just for that. The projects themselves already use C++, so there are several advantages that I can see:

      - to build the whole project, only a C++ compiler and CMake would be necessary, nothing else (all the other dependencies are C or C++);
      - C++ type safety (when using modern C++) makes everything easier to get "correct";
      - it's also the language I know best, so I'm more at ease with it, even if I'm able to write some good Python code;
      - potential gain in execution speed (but I don't think it will really be perceptible).

    However, I think there might be some drawbacks, and I'm not sure of the real impact as I didn't try yet:

      - it might take longer to write the code (that said, I'm not sure, because I'm efficient enough in C++ to write something that works quickly, so maybe for this system it wouldn't take so long to write; compilation time shouldn't be a problem for this case);
      - I must assume that all the text files I'll read as input are in UTF-8; I'm not sure it can be easily checked at runtime in C++, and the language will not check it for you;
      - libraries in C++ are harder to manage than in scripting languages;
      - I lack experience and foresight, so maybe I'm missing advantages and drawbacks.

    So the question is: does it make sense to use C++ for this? Do you have experiences to report, and do you see advantages and disadvantages that might be important?

    Read the article

  • How does eMail encryption work?

    - by Dummy Derp
    I have been going over YouTube watching videos on email encryption, and everyone seems to explain it from a different perspective. Some do it for a CompTIA exam while others just provide a primer. Here is what I understood:

    Step 1: You compose an email that you want to send. Without encryption, it will be simple ASCII text that will be visible to anyone along the way.
    Step 2: You generate a digital signature to make sure that nobody gets to re-transmit your email and claim it was you. The digital signature is generated using the sender's private key, which is usually a hash of the password, and is then combined with the original message to form one long hash string. These signatures are one-time-use-only, and a new one is calculated for every email.
    Step 3: You encrypt the content of your email using the receiver's public key, so that the only person who can read it is the intended receiver, using their private key.
    Step 4: When you hit send, what is transmitted is gibberish to everyone apart from the intended receiver, who will decrypt it using their private key.

    And there are various ways to do it, like PEM, PGP, etc. Correct me where I am wrong or refine where necessary.
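
    For reference, a minimal sketch of the sign-then-encrypt flow using GnuPG (the address and filename are illustrative):

        # sign with your private key, encrypt with the receiver's public key
        gpg --armor --sign --encrypt -r receiver@example.com message.txt
        # the receiver decrypts with their private key, which also verifies the signature
        gpg --decrypt message.txt.asc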

    Read the article

  • Can't start SSH on Ubuntu 12.10 AWS EC2

    - by Conor H
    So I've just started playing around with Ubuntu on Amazon EC2. I've just issued the following command to restart ssh, but it has now "killed" ssh:

        sudo /etc/init.d/ssh restart

    I can't seem to ssh to this instance anymore; PuTTY just gives me "connection refused". NOTE: in this case I just restarted SSH to see the result; I didn't change any settings. This was to confirm that the restart command was the problem and not any configs I made. What is the correct way to restart SSH? P.S. That usually works on other Ubuntu boxes. Thanks. EDIT: It is also worth noting that when I ran that command I was taken straight back to a prompt. I didn't get any output on the console.
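
    For reference, Ubuntu releases of this era manage OpenSSH with Upstart, so a sketch of the usual commands would be:

        sudo service ssh restart      # or: sudo restart ssh
        sudo service ssh status      # confirm the daemon is actually running
        sudo /usr/sbin/sshd -t       # test the config; prints nothing if it is valid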

    Read the article

  • Draw Rectangle To All Dimensions of Image

    - by opiop65
    I have some rudimentary collision code:

        public class Collision {
            static boolean isColliding = false;
            static Rectangle player;
            static Rectangle female;

            public static void collision() {
                Rectangle player = Game.Playerbounds();
                Rectangle female = Game.Femalebounds();
                if (player.intersects(female)) {
                    isColliding = true;
                } else {
                    isColliding = false;
                }
            }
        }

    And this is the rectangle code:

        public static Rectangle Playerbounds() {
            return (new Rectangle(posX, posY, 25, 25));
        }

        public static Rectangle Femalebounds() {
            return (new Rectangle(femaleX, femaleY, 25, 25));
        }

    My InputHandling class:

        public static void movePlayer(GameContainer gc, int delta) {
            Input input = gc.getInput();
            if (input.isKeyDown(input.KEY_W)) {
                Game.posY -= walkSpeed * delta;
                walkUp = true;
                if (Collision.isColliding == true) {
                    Game.posY += walkSpeed * delta;
                }
            }
            if (input.isKeyDown(input.KEY_S)) {
                Game.posY += walkSpeed * delta;
                walkDown = true;
                if (Collision.isColliding == true) {
                    Game.posY -= walkSpeed * delta;
                }
            }
            if (input.isKeyDown(input.KEY_D)) {
                Game.posX += walkSpeed * delta;
                walkRight = true;
                if (Collision.isColliding == true) {
                    Game.posX -= walkSpeed * delta;
                }
            }
            if (input.isKeyDown(input.KEY_A)) {
                Game.posX -= walkSpeed * delta;
                walkLeft = true;
                if (Collision.isColliding == true) {
                    Game.posX += walkSpeed * delta;
                }
            }
        }

    The code works partially: only the right and top sides of the images collide. How do I correct the rectangle so it registers collisions on all sides? Thanks for any suggestions!

    Read the article

  • Wrong DNS for just one site on development machine

    - by Patrick
    www.superyoink.de is my client's website. I can access it from any machine except my development one. If I ping it on my development machine, I get 80.67.28.107 - this is wrong. My laptop, next to me, is able to resolve it correctly. I have tried putting the correct address into the hosts file like so:

        93.187.232.191 www.superyoink.de

    It still resolves to the wrong address. If I enter bogus DNS server addresses, nothing else works, but www.superyoink.de still resolves to 80.67.28.107. I rebooted and did ipconfig /flushdns; nothing seems to work. I run 32-bit Vista. My impression is that it has stored the wrong DNS resolution somewhere and is not even trying the DNS servers. But where? Help please!
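
    For reference, two checks that can narrow down where the wrong answer comes from; nslookup queries the configured DNS server directly and bypasses the hosts file, so if it returns the correct address while ping does not, the bad answer is coming from something local (hosts file, a proxy, or a layered service provider) rather than from DNS:

        :: query the configured DNS server directly (bypasses the hosts file)
        nslookup www.superyoink.de
        :: inspect what is sitting in the local resolver cache
        ipconfig /displaydns | more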

    Read the article

  • What can you do to decrease the number of live issues with applications?

    - by User Smith
    First off, I have seen this post, which is slightly similar to my question: What can you do to decrease the number of deployment bugs of a live website? Let me lay out the situation for you. The team of programmers that I belong to has metrics associated with our code. Over the last several months the errors in our live system have increased by a large amount. We require that our updates to applications be tested by at least one other programmer prior to going live. I personally am completely against this, as I think that applications should be tested by end users - end users are much better testers than programmers. I am not against programmers testing; obviously programmers need to test code, but most of the time they are too close to the code. The reason I specify that I think end users should test in our scenario is that we don't have business analysts; we just have programmers. I come from a background where BAs took care of all the testing once programmers checked off that it was ready to go live. We do have a staging environment in place that is a clone of the live environment, which we use to ensure that we don't have issues between the development and live environments; this does catch some bugs. We don't really do end user testing at all - I should say we don't really have anyone testing our code except programmers, which I think gets us into this mess (ideally, we would have BAs, QA, or professional testers test). We don't have fully laid out test cases for our projects. Ok, I am just a peon programmer at the bottom of the rung, but I am probably more tired of these issues than the managers complaining about them, so I don't have the ability to tell them they are doing it all wrong... I have tried gentle pushes in the correct direction. Any advice or suggestions on how to alleviate this issue are greatly appreciated. Thanks.

    Read the article

  • HTG Explains: What Are Computer Algorithms and How Do They Work?

    - by YatriTrivedi
    Unless you’re into math or programming, the word “algorithm” might be Greek to you, but it’s one of the building blocks of everything you’re using to read this article. Here’s a quick explanation of what they are, and how they work. Disclaimer: I’m not a math or computer science teacher, so not all of the terms I use are technical. That’s because I’m trying to explain everything in plain English for people who aren’t quite comfortable with math. That being said, there is some math involved, and that’s unavoidable. Math geeks, feel free to correct or better explain in the comments, but please, keep it simple for the mathematically disinclined among us. Image by Ian Ruotsala

    Read the article

  • Poor Customer Service Example

    - by MightyZot
    Lately I have been frustrated by examples of poor customer service. At least one is worth writing about, because I don’t think companies realize the effects of their service policies on loyal customers.

    Bad Customer Service Example #1

    Recently, I received an offer in the mail from my cable company, suddenLink. The offer was for an updated TiVo for $12/mo. Normally I ignore offers like this one because I already have the service they’re offering, and many times advertisers are offering alternatives to what is already an excellent product offering. I tend to exhibit a high level of loyalty to the products and brands that I use. In this case, we were looking to upgrade our TiVo, and this deal was attractive for several reasons:

      - I don’t want to pay a huge amount up-front for the device, so paying a monthly amount for the device is attractive to me.
      - My entertainment is almost all on a single invoice. I’m no longer going to be billed separately by suddenLink and TiVo.
      - TiVo is still involved, so I am still loyal to the brand I love. I have resisted moving to other DVRs and services for over a decade.

    I called suddenLink to order the new TiVo and was rewarded with great customer service. In fact, I can’t remember ever getting poor customer service from suddenLink. They are always there to answer my technical support questions and they are very responsive to outages. Then I called TiVo. First of all, I chose the option on the phone system to change or cancel my service, which was met by an inordinate hold time. (I’m calling this time inordinate because I get through very quickly if I want to purchase something.) This is a trend that I’ve noticed with companies - if you want me to be loyal to you, it should be just as easy to cancel your service as it is to purchase it, because I should never be cancelling unless I am unhappy. And if you ever want my business again, or more importantly a reference, then you’d better make the exit door just as easy to open as the entrance door. After quite some time on hold, I talked to “Victor”, who was very courteous. Victor canceled my service and then told me that I could keep my current TiVo and transfer recorded programs to it from the new TiVo. Cool, I said, but what about the cost? He said there was no extra cost. This was also attractive to me because I paid for my TiVo, and it would be good to use it for something at least. That was four months ago. This month I noticed that TiVo was still charging me for my original service. I was a little upset, but I decided to give them the benefit of the doubt. After all, I am a loyal TiVo customer, and I have resisted moving to other solutions for over a decade. I’m sure they will do whatever it takes to keep my business, through TiVo or through suddenLink. After quite some time on hold, I was able to talk to a customer service representative, “Les”. I explained that I am a loyal TiVo customer, but I purchased this deal through my cable provider. I’m still with TiVo; I just wanted a single bill and to take advantage of the pay-over-time option. “Les” told me that he was very sorry to hear that I’m leaving TiVo, to which I responded again that I wasn’t leaving TiVo, I just want one invoice, and to take advantage of the pay-over-time. So, after explaining that I requested a termination of the non-suddenLink account (TiVo can see both, of course), I was put on hold again for quite some time while my refund was “approved”.  “Les” said that he could see my cancellation request back in July.
    Note that it is now November, so they have billed me inappropriately four times. After quite some time, he came back on the line and told me that he was able to “get me most of my money back.” He got approval to refund 90 days. Even though I requested cancellation of one of my accounts, TiVo has that cancellation request on file, and they admit overbilling me - yet I am only going to get “most” of my money back. To top this experience off, when we were ready to hang up, “Les” told me that he was sorry to see me go and that he hoped I would come back to TiVo again. Again, I explained to “Les” that I have not left TiVo; I am just paying them through suddenLink. At that point, he went into a small dissertation about how this is a special arrangement they have with suddenLink and very few others. He made me feel like I was doing something wrong. Why should I feel that way? TiVo made the deal with suddenLink, not me, and the deal seemed like a good compromise for me to be able to get what I need. Here is what TiVo Customer Service accomplished on those two calls - I no longer feel like I need to be loyal to the TiVo brand or service. If I had been treated better on these two calls, I would still be recommending TiVo to my friends. They would still be getting revenue from a loyal customer, who paid the same rate for over a decade, and this article wouldn’t be here for you to read. Interesting… In my opinion, if you want brand loyalty, be loyal to your customers!

    Read the article

  • Where can I find a list of the highest resolution monitors for sale?

    - by speedmetal
    I am always on the lookout for the latest and greatest monitors, but for some reason, I have never found a good resource for this information. And while we know months in advance what new processor will be released, it doesn't seem like we ever know what new monitors will be released until they are. I am tempted to buy a Dell 3008WFP, but since the 30" 2560x1600 monitors have been out for 6 years, I would expect something better is about to be released. Where can I find out what is available in the high resolution / widescreen market and what is soon to be released? EDIT: I did finally find a resource for this information: Comprehensive List of IPS Based LCD Monitors However, if anyone finds another similar or possibly better resource, I will give it a correct answer mark. Thanks!

    Read the article

  • Connecting FreeNAS 8 to Mac OS X Lion LDAP Server

    - by Absolution
    I currently have Mac OS X Lion Server running on a MacMini and want to use it purely as an LDAP server for authentication for FreeNAS 8. I have FreeNAS set up and running on a VM, all features working correctly and as expected; however, I cannot connect to my LDAP server (the MacMini). Error message:

        Nss_ldap: could not search LDAP server - server is unavailable

    For the LDAP service settings in FreeNAS, I know my Hostname and Base DN are correct (exact copies of what I set originally, and ones that are shown in the Server app's Open Directory overview); however, I am unsure what to enter for the Root bind DN, password, and suffixes. I have researched where I can find these out, and other than following the FreeNAS examples, it appears there is a way to view them within Workgroup Manager on the server - however, this function is unavailable to me and cannot be 'ticked' to view for some strange reason. Some forums explain how the Root bind DN should be uid=admin, dc=… and others cn=admin, dc=… - I'm rather confused and would appreciate your help or advice with this.
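
    For reference, a sketch of one way to ask the Open Directory server for its base DN from any machine with the OpenLDAP client tools installed (the hostname is illustrative):

        ldapsearch -x -H ldap://macmini.example.com -b "" -s base namingContexts

    On Lion Server the directory administrator account created during setup is typically diradmin, so a root bind DN of the form uid=diradmin,cn=users,dc=macmini,dc=example,dc=com (with your own dc components) is a reasonable first guess - an assumption to verify against your server, not a documented rule.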

    Read the article

  • Does RAID 1 protect against corruption?

    - by Shaun
    Does RAID 1 protect against data corruption? For example, let's say that I am keeping all of my important files on a NAS that uses two disks in a RAID 1. If one hard drive has some kind of internal problem and the data becomes corrupted, does the RAID recognize this automatically and correct it using data from the other good disk? Could it even know which copy is the good one? Does RAID 5 protect against corruption? I know that RAID is not a backup solution. I am trying to figure out how to make sure that I am not backing up corrupt data!

    Read the article

  • debian - running unattended-upgrades on a particular day of the week

    - by dastra
    We're running unattended-upgrades on Debian squeeze, and would like it to run once a week, only on a Wednesday morning. To attempt this, we have set:

        APT::Periodic::Unattended-Upgrade "7"

    in /etc/apt/apt.conf.d/50unattended-upgrades, and then touched /var/lib/apt/periodic/update-stamp to set the timestamp to a Wednesday, for instance:

        touch -t 201211280000 /var/lib/apt/periodic/update-stamp

    Running:

        stamp=$(date --date=$(date -r /var/lib/apt/periodic/update-stamp --iso-8601) +%s 2>/dev/null)
        date -u --date="1970-01-01 $stamp sec GMT"

    gives the correct timestamp:

        Wed Nov 28 00:00:00 UTC 2012

    However, unattended-upgrades then seems to ignore this and run the updates on a Saturday morning. Could anyone enlighten me as to how this parameter works, and how to set up upgrades to run on a Wednesday?
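
    One commonly suggested workaround, sketched here as an assumption rather than documented behaviour: the APT::Periodic values are day intervals checked by /etc/cron.daily/apt against the stamp file, so the weekday drifts whenever a run is skipped. Pinning the job to Wednesday can instead be done by disabling the periodic trigger and calling unattended-upgrade from cron directly:

        # in /etc/apt/apt.conf.d/50unattended-upgrades (or 10periodic):
        #   APT::Periodic::Unattended-Upgrade "0";
        # then in /etc/cron.d/unattended-upgrade-weekly - Wednesdays at 06:00:
        0 6 * * 3   root   /usr/bin/unattended-upgrade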

    Read the article

  • Preserving smart playlist order on iPhone

    - by Doug Harris
    I have a smart playlist configured to play my favorite podcasts during my commute. It's a list of audio-only, unheard podcasts: In iTunes, I've ordered the playlist by Release Date -- i.e. I want to listen to the oldest podcast episodes first. The order is correct in iTunes but when I sync to my iPhone the order is jumbled. Currently, I have five episodes in this playlist -- ordered by release date they're A,B,C,D,E. On my iPhone, they're ordered B,D,C,E,A. There's no sort order that's obvious to me -- not sorted by name of episode, name of podcast, length of episode, date added, etc. Any solutions to ordering smart playlists on the iPhone? Update: This is likely an iTunes 9 bug. There's lots of discussion about this over on apple.com

    Read the article

  • Windows service running as network service - how does it authenticate? Breaking change in W2K8?

    - by Max
    A Windows service running as "Network Service" talks to services on other machines (here: SQL Server and Analysis Services), using Windows authentication. For authentication, we have to grant permissions to the machine account of the service. E.g., if the service runs on server MYSERVER in domain MYDOMAIN, it'll authenticate itself as "MYDOMAIN\MYSERVER$". - Am I correct, so far? Now here's my question: does this still apply when talking to a service on the SAME machine? Or will it authenticate with something like "NT AUTHORITY\Network Service" instead when connecting to a local service? And: is there any chance this is a breaking change from Windows 2003 to Windows 2008? We're having an actual issue in our system where the account was able to connect to local services with only the machine account having permissions in W2K3. In W2K8, this doesn't seem to work anymore: authentication to local services now fails, but still works to remote machines.

    Read the article

  • Mandelbrot set not displaying properly

    - by brainydexter
    I am trying to render the Mandelbrot set using GLSL, but I'm not sure why it's not rendering the correct shape. Does the Mandelbrot calculation require values to be within a range for the (x, y) [or (real, imag)] values? Here is a screenshot:

    I render a quad as follows:

        float w2 = 6;
        float h2 = 5;
        glBegin(GL_QUADS);
        glVertex3f(-w2, h2, 0.0);
        glVertex3f(-w2, -h2, 0.0);
        glVertex3f(w2, -h2, 0.0);
        glVertex3f(w2, h2, 0.0);
        glEnd();

    My vertex shader:

        varying vec3 Position;

        void main(void)
        {
            Position = gl_Vertex.xyz;
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

    My fragment shader (where all the meat is):

        uniform float MAXITERATIONS;
        varying vec3 Position;

        void main(void)
        {
            float zoom = 1.0;
            float centerX = 0.0;
            float centerY = 0.0;

            float real = Position.x * zoom + centerX;
            float imag = Position.y * zoom + centerY;

            float r2 = 0.0;
            float iter;

            for (iter = 0.0; iter < MAXITERATIONS && r2 < 4.0; ++iter)
            {
                float tempreal = real;
                real = (tempreal * tempreal) + (imag * imag);
                imag = 2.0 * real * imag;
                r2 = (real * real) + (imag * imag);
            }

            vec3 color;
            if (r2 < 4.0)
                color = vec3(1.0);
            else
                color = vec3(iter / MAXITERATIONS);

            gl_FragColor = vec4(color, 1.0);
        }
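
    For comparison, a standard escape-time inner loop keeps the pixel's starting coordinates as a constant c that is added back every iteration, and computes the new real part as a difference of squares; cReal/cImag below are illustrative names for the values that real/imag are initialized to:

        // z = z^2 + c, using the pre-update real part (tempreal) in both components
        float tempreal = real;
        real = (tempreal * tempreal) - (imag * imag) + cReal;
        imag = 2.0 * tempreal * imag + cImag;
        r2 = (real * real) + (imag * imag);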

    Read the article
