Search Results

Search found 22839 results on 914 pages for 'decimal point'.

Page 129 of 914

  • OpenVPN Cloud for Network monitoring

    - by mezgani
    I'm working on a network monitoring project based on OpenVPN, which is a good way to send traffic through a secure channel to the office and from there out to the Internet. At the office I have an OpenVPN server installed, and I need to monitor all the branch servers, which sit behind firewalls. I know the point-to-point solution is easy: we could simply install an OpenVPN client on every node I need to monitor. Is there another approach that would let me supervise each branch's entire DMZ network without installing the client on every machine?
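
    One common pattern, sketched below purely as an illustration, is to run a single OpenVPN client on one gateway host per branch and route the whole branch DMZ subnet through it, so the office can reach every machine without installing a client on each one. The subnet 10.20.0.0/24 and the name branch1 are placeholders; the branch gateway also needs IP forwarding (and usually NAT or return routes) so the DMZ hosts can answer.

        # office server.conf: the 10.20.0.0/24 network lives behind the client "branch1"
        route 10.20.0.0 255.255.255.0
        client-config-dir ccd

        # ccd/branch1 (one file per branch gateway)
        iroute 10.20.0.0 255.255.255.0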

    Read the article

  • How to use IIS 6.0 redirects?

    - by payling
    I'm attempting to use the reference below to create a redirect for my local site, with no luck. http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/6b855a7a-0884-4508-ba95-079f38c77017.mspx?mfr=true I have absolute links on my local site that point to the online site, and I want them to point to my local site instead. For example, given the absolute link [http://online.com/products], when I click it in the local version I'd like it to redirect to [http://offline/products]. In other words, I want to preserve everything after the domain name and append it to the local server name, so that when I click a link it redirects to the local site and not the online version. I've tried [http://offline$S], but that doesn't append the "suffix" /products the way I thought it should. What's going on here?
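
    For reference, a hedged example of how this is usually written with the IIS 6 redirect variables, assuming the local site really is reachable as http://offline: $S is the remaining path suffix, $Q is the query string, and the EXACT_DESTINATION flag tells IIS to use the string exactly as given (without it, per the IIS 6 documentation, the suffix is also appended automatically, which can double it up).

        HttpRedirect = "http://offline$S$Q, EXACT_DESTINATION"

    In the IIS Manager UI this should correspond to entering http://offline$S$Q as the destination and ticking "The exact URL entered above".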

    Read the article

  • Having XP VM use my host OSX ssh tunnel to connect to a remote site?

    - by Manachi
    I am using Mac OSX and have Windows XP running on VMware Fusion. I'm creating an ssh tunnel from OSX to a remote server and then trying to have Windows XP use that tunnel (I actually use a program called Proxifier on XP to route my XP MS SQL Server traffic through that tunnel). Note that I can successfully create an ssh tunnel (on port 9333) from PuTTY on XP to the remote host, have SQL Server proxify through that tunnel, and it all works correctly. However, when I set up the tunnel on OSX and have Proxifier in XP point to the OSX tunnel instead of localhost, it doesn't seem to connect. Here is the OSX command I'm using to create the tunnel: ssh -i /my/key -p 9001 -D 9333 -g me@remotehostname Then I set my XP Proxifier to point to macosxhostname:9333 (instead of the previous localhost:9333, which worked correctly when using PuTTY). Any suggestions on what I may have missed? My XP firewall is turned off while setting this up.
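
    Two things worth checking, as a hedged sketch that assumes the XP guest reaches the Mac over VMware's NAT or host-only network: first, that the SOCKS listener is bound to an interface the guest can actually reach rather than only 127.0.0.1, and second, that Proxifier is pointed at the Mac's address as the VM sees it (e.g. the vmnet1/vmnet8 address shown by ifconfig on the host) rather than at a hostname the guest may not resolve.

        # on OSX: confirm the dynamic forward listens on all interfaces,
        # i.e. you see *.9333 rather than 127.0.0.1.9333
        netstat -an | grep 9333

        # if it is bound to loopback only, force the bind address explicitly
        ssh -i /my/key -p 9001 -D 0.0.0.0:9333 me@remotehostname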

    Read the article

  • Glenn Fiedler's fixed timestep with fake threads

    - by kaoD
    I've implemented Glenn Fiedler's Fix Your Timestep! quite a few times in single-threaded games. Now I'm facing a different situation: I'm trying to do it in JavaScript. I know JS is single-threaded, but I plan on using requestAnimationFrame for the rendering part. This leaves me with two independent fake threads: simulation and rendering (I suppose requestAnimationFrame isn't really threaded, is it? I don't think so; that would break JS). Timing in these threads is independent too: the dt for simulation and the dt for rendering are not the same. If I'm not mistaken, the simulation should run up to the end of Fiedler's while loop. After the while loop, accumulator < dt, so I'm left with some unspent time (less than dt) in the simulation thread. The problem comes in the draw/interpolation phase:
        const double alpha = accumulator / dt;
        State state = currentState*alpha + previousState * ( 1.0 - alpha );
        render( state );
    In my render callback I have the current timestamp, from which I can subtract the last-simulated-physics timestamp to get a dt for the current frame. Should I just forget about this dt and draw using the physics thread's dt? That seems weird, since I want to interpolate over the unspent time between simulation and render too, right? Of course I want simulation and rendering to be completely independent, but I can't get around the fact that in Glenn's implementation the renderer produces time and the simulation consumes it in discrete dt-sized chunks. A similar question was asked in Semi Fixed-timestep ported to javascript, but that question doesn't really get to the point, and the answers there suggest either removing physics from the render thread (which is what I'm trying to do) or keeping physics in the render callback too (which is what I'm trying to avoid).
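
    Here is a minimal sketch, in TypeScript, of how the two "threads" can collapse into a single requestAnimationFrame callback without any separate render dt: the accumulator alone decides how many fixed steps to run and what fraction is left over for interpolation. The State shape and the integrate/interpolate/render bodies are placeholders, not Fiedler's actual code.

        type State = { x: number; v: number };

        const DT = 1 / 60;                        // fixed simulation step (seconds)
        let previous: State = { x: 0, v: 1 };
        let current: State = { x: 0, v: 1 };
        let accumulator = 0;
        let lastTime = performance.now();

        const integrate = (s: State, dt: number): State =>
          ({ x: s.x + s.v * dt, v: s.v });        // stand-in physics
        const interpolate = (a: State, b: State, alpha: number): State =>
          ({ x: a.x * (1 - alpha) + b.x * alpha, v: a.v * (1 - alpha) + b.v * alpha });
        const render = (s: State): void => { /* draw s */ };

        function frame(now: number): void {
          let frameTime = (now - lastTime) / 1000;
          lastTime = now;
          if (frameTime > 0.25) frameTime = 0.25; // clamp to avoid the spiral of death

          accumulator += frameTime;               // the renderer "produces" time
          while (accumulator >= DT) {             // the simulation "consumes" it in DT chunks
            previous = current;
            current = integrate(current, DT);
            accumulator -= DT;
          }

          // whatever is left in the accumulator is the only per-frame dt you need
          render(interpolate(previous, current, accumulator / DT));
          requestAnimationFrame(frame);
        }
        requestAnimationFrame(frame);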

    Read the article

  • Desktop icons disappear when Nautilus is launched, until next boot

    - by Santosh
    What happens: when I log in to Ubuntu everything is normal; I have some icons on my desktop and I can see them at that point. As soon as I click the file manager (Nautilus) on the Launcher bar, everything disappears and never comes back. No matter how many times I click the launcher, Nautilus won't open. I tried opening Nautilus from the terminal and got the following:
        santosh@santosh:~$ nautilus
        Initializing nautilus-gdu extension
        ** (nautilus:2158): DEBUG: SyncDaemon already running, initializing SyncdaemonDaemon object
        Initializing nautilus-open-terminal extension
        (nautilus:2158): Gtk-CRITICAL **: gtk_action_set_visible: assertion `GTK_IS_ACTION (action)' failed
        (nautilus:2158): Gtk-CRITICAL **: gtk_action_set_visible: assertion `GTK_IS_ACTION (action)' failed
        Segmentation fault
    I suspect the "Segmentation fault" on the last line; what is that? I was amazed when I ran the same command with sudo:
        santosh@santosh:~$ sudo nautilus
        Initializing nautilus-gdu extension
        ** (nautilus:2216): DEBUG: Syncdaemon not running, waiting for it to start in NameOwnerChanged
        Initializing nautilus-open-terminal extension
        (nautilus:2216): Gtk-CRITICAL **: gtk_action_set_visible: assertion `GTK_IS_ACTION (action)' failed
        (nautilus:2216): Gtk-CRITICAL **: gtk_action_set_visible: assertion `GTK_IS_ACTION (action)' failed
        Nautilus-Share-Message: Called "net usershare info" but it failed: 'net usershare' returned error 255: net usershare: cannot open usershare directory /var/lib/samba/usershares. Error No such file or directory Please ask your system administrator to enable user sharing.
    As soon as I type sudo nautilus and hit enter, Nautilus starts and the desktop background changes (to the Ubuntu default). I don't know why, but at that point, as soon as I click Desktop (in the left pane), Nautilus closes. Has anyone had the same issue? I am currently working from the command line to get my work done and it's a big pain. Additional information: I have also noticed that when I try to open the location of a PDF I'm reading in Document Viewer (by clicking the folder icon), it gives the errors "Could not open the containing folder" and "Failed to execute child process "nautilus" (Permission denied)". Any idea?

    Read the article

  • Re-open Word document to previous cursor location with identical page vertical position

    - by Malcolm
    I would like to return to my previous point of edit with the page positioned vertically exactly as it was before. The Shift+F5 technique returns me to the previous point of edit, but the page I return to is vertically positioned on the screen in a somewhat random manner. In other words, if my cursor was 300 vertical pixels from the top of the document viewport, I would like to re-open my page so that the cursor is still 300 vertical pixels from the top of my viewport. The following can be used to determine the vertical position (on the screen) of my text cursor:
        ActiveWindow.GetPoint pLeft, pTop, pWidth, pHeight, Selection.Range
    So the challenge becomes: how do I scroll my document so that my text cursor returns to its original vertical position (pTop)? There is no corresponding ActiveWindow.SetPoint, and ActiveWindow.ScrollIntoView scrolls a selection range into view but offers no control over the vertical position of the selection range on the screen.

    Read the article

  • www A record vs CNAME record

    - by Sorin Buturugeanu
    Hi! I have a website that I will be hosting DNS for (testing purposes at first, and then it will have some limited traffic). I have set up DNS so that site.tld has an A record pointing to the actual IP, but I don't know what to do about www.site.tld. Both site.tld and www.site.tld will point to the same server / application, so my logic tells me to add a CNAME record so that www.site.tld becomes an alias for site.tld. But I've been checking my settings with intodns.com, and if I only add a CNAME for www.site.tld it gives me the following error: ERROR: I could not get any A records for www.cexa.ro! (The error clears once I add an A record for www.site.tld pointing to the actual IP.) I don't know if there is a "rule" that "www." should always be an A record even though it's actually pointing to the same IP / application. Thanks for helping me understand this!
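
    For what it's worth, the textbook setup looks like the snippet below (zone-file notation; 203.0.113.10 is a documentation placeholder for the real IP): the apex gets the A record and www is a CNAME to the apex, which resolvers then follow to the same address. A plain A record for www works just as well and is what some checking tools expect to see.

        site.tld.        3600  IN  A      203.0.113.10
        www.site.tld.    3600  IN  CNAME  site.tld.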

    Read the article

  • How do I manage the technical debate over WCF vs. Web API?

    - by Saeed Neamati
    I'm managing a team of about 15 developers now, and we are stuck on a technology choice: the team has split into two completely opposed camps debating the use of WCF vs. Web API.
    Team A, which supports using Web API, puts forward these reasons:
    - Web API is just the modern way of writing services (Wikipedia)
    - WCF is overhead for HTTP; it's a solution for TCP, named pipes, and other protocols
    - WCF models are not POCO, because of the [DataContract] and [DataMember] attributes
    - SOAP is not as readable and handy as JSON
    - SOAP is network overhead compared to JSON (transport over HTTP)
    - No method overloading
    Team B, which supports using WCF, says:
    - WCF supports multiple protocols (via configuration)
    - WCF supports distributed transactions
    - Many good examples and success stories exist for WCF (while Web API is still young)
    - Duplex is excellent for two-way communication
    The debate is continuing, and I don't know what to do now. Personally, I think we should use each tool in its right place: we'd be better off using Web API if we want to expose a service over HTTP, but using WCF when it comes to TCP and duplex. Searching the Internet doesn't give a solid answer; many posts support WCF, but we also find people complaining about it. I know this question may sound argumentative, but we need some good hints to decide. We're stuck at a point where choosing a technology by chance might make us regret it later; we want to choose with open eyes. Our usage would be mostly for the web, and we would expose our services over HTTP. In some cases (say 5 to 10 percent), though, we might need distributed transactions. What should I do now? How do I manage this debate in a constructive way?

    Read the article

  • Calculating Utilization in a Stop-And-Wait Protocol

    - by AlanTuring
    There's this question in my book, and the book doesn't state anywhere exactly how to go about calculating utilization; I haven't been able to find any substantial information on what I need to solve it (my midterm is next week). Anyway, here's the question: The distance from earth to a distant planet is approximately 9 × 10^10 m. What is the channel utilization if a stop-and-wait protocol is used for frame transmission on a 64 Mbps point-to-point link? Assume that the frame size is 32 KB and the speed of light is 3 × 10^8 m/s. Suppose a sliding window protocol is used instead. For what send window size will the link utilization be 100%? You may ignore the protocol processing times at the sender and the receiver. Thanks to anyone who has any idea.
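
    The usual reasoning, sketched below under common textbook conventions (32 KB taken as 32 × 1024 bytes and 64 Mbps as 64 × 10^6 bit/s; check which your book uses): stop-and-wait sends one frame and then idles for a full round trip, so utilization is the transmission time divided by the transmission time plus two propagation delays, and a sliding window reaches 100% once enough frames fit into that whole cycle.

        const frameBits = 32 * 1024 * 8;          // 262,144 bits per frame
        const rateBps   = 64e6;                   // 64 Mbps link
        const tTransmit = frameBits / rateBps;    // ~0.004096 s to clock one frame out
        const tProp     = 9e10 / 3e8;             // one-way propagation: 300 s

        // stop-and-wait: one frame per (transmit time + round-trip wait)
        const utilization = tTransmit / (tTransmit + 2 * tProp);

        // sliding window: window * tTransmit must cover the whole cycle
        const windowFor100 = Math.ceil((tTransmit + 2 * tProp) / tTransmit);

        console.log(utilization, windowFor100);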

    Read the article

  • Converting 2D coordinates from multiple viewpoints into 3D coordinates

    - by Kirk Smith
    Here's the situation. I've got a set of 2D coordinates that specify a point on an image. These 2D coordinates relate to an event that happened in a 3D space (a video game). I have 5 images with the same event point on them, so I have 5 sets of 2D coordinates for a single 3D coordinate. I've tried everything I can think of to translate these 2D coordinates into 3D coordinates, but the math just escapes me. I have a good estimate of the coordinates from which each image was taken; they're not perfect, but they're close. I tried simplifying this by opening Cinema 4D, a 3D modeling application: I placed 5 cameras at the coordinates where the pictures were taken, lined up the pictures with the event points for each one, and tried to find a link, but nothing was forthcoming. I know it's a math question, but like I said, I just can't get it. I took physics in high school 6 years ago, but we didn't deal with a whole lot of this sort of thing. Any help would be very much appreciated; I've been thinking about it for quite a while and I just can't come up with anything.
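
    One way to frame the math, offered as a hedged sketch: if each 2D image point can be turned into a viewing ray in world space (which needs each camera's orientation and field of view, not just its position), then the 3D event point is simply the point closest to all five rays, and that has a small least-squares solution. The code below assumes you already have the camera origins and unit direction vectors; recovering those from the screenshots is the harder, game-specific part.

        type Vec3 = [number, number, number];

        // solve a 3x3 linear system A x = b by Cramer's rule (fine at this size)
        function solve3(A: number[][], b: Vec3): Vec3 {
          const det = (m: number[][]): number =>
            m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1]) -
            m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0]) +
            m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
          const d = det(A);
          const withCol = (i: number) =>
            A.map((row, r) => row.map((v, c) => (c === i ? b[r] : v)));
          return [det(withCol(0)) / d, det(withCol(1)) / d, det(withCol(2)) / d];
        }

        // least-squares point nearest to all rays (origin c, unit direction d):
        // solve [sum (I - d dT)] p = sum (I - d dT) c
        function triangulate(origins: Vec3[], dirs: Vec3[]): Vec3 {
          const A = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
          const b: Vec3 = [0, 0, 0];
          for (let k = 0; k < origins.length; k++) {
            const c = origins[k], d = dirs[k];
            for (let i = 0; i < 3; i++) {
              for (let j = 0; j < 3; j++) {
                const m = (i === j ? 1 : 0) - d[i] * d[j];
                A[i][j] += m;
                b[i] += m * c[j];
              }
            }
          }
          return solve3(A, b);
        }

    With five rays that are reasonably spread out, triangulate() should land close to the true event point; with noisy camera estimates it degrades gracefully, since it is just a least-squares fit.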

    Read the article

  • VPN - force a selective range of IPs onto the VPN (Linux)

    - by Francesco
    Preface: I know there are similar questions here and there; however, I'm a bit of a newbie at networking, so I need an answer for this specific scenario, hoping it can help others too, as it is a common problem. Let's say I cannot do anything on the local switch to change the local IP range, and I don't want to use any complicated trick such as a virtual machine to hide the local IP range; I want to solve the issue with networking tools. Scenario: my local network assigns me an IP in the class 192.168.1.xxx (e.g. 192.168.1.116) and my VPN (VPNC) assigns me an IP in the same class 192.168.1.xxx (e.g. 192.168.1.247). Obviously I need the VPN to access a local address on the remote side (e.g. 192.168.1.100), but when I open any address in the 192.168.1.xx class, the route points to my local network and not to the VPN. I'm on Linux and I'd like a GUI solution (Network Manager); if that is not possible, let's play with the route command. Here is what Network Manager offers me and my actual route once connected to the VPN; here is the route information (route -n):
        Destination     Gateway        Genmask          Flags Metric Ref Use Iface
        0.0.0.0         0.0.0.0        0.0.0.0          U     0      0   0   ppp0
        169.254.0.0     0.0.0.0        255.255.0.0      U     1000   0   0   wlan0
        182.71.21.106   192.168.1.1    255.255.255.255  UGH   0      0   0   wlan0
        182.71.21.106   192.168.1.1    255.255.255.255  UGH   0      0   0   wlan0
        192.168.1.0     0.0.0.0        255.255.255.0    U     9      0   0   wlan0
        192.168.1.246   0.0.0.0        255.255.255.255  UH    0      0   0   ppp0
    Here is my ifconfig:
        ppp0   Link encap:Point-to-Point Protocol
               inet addr:192.168.1.247  P-t-P:192.168.1.246  Mask:255.255.255.255
               UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1400  Metric:1
               RX packets:3415 errors:0 dropped:0 overruns:0 frame:0
               TX packets:2525 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:3
               RX bytes:3682328 (3.6 MB)  TX bytes:402315 (402.3 KB)
        wlan0  Link encap:Ethernet  HWaddr 4c:eb:42:06:a3:a6
               inet addr:192.168.1.116  Bcast:192.168.1.255  Mask:255.255.255.0
               inet6 addr: fe80::4eeb:42ff:fe06:a3a6/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:72598 errors:0 dropped:0 overruns:0 frame:0
               TX packets:42300 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:76000532 (76.0 MB)  TX bytes:13919400 (13.9 MB)
    The question: basically, I would like to add a rule that forces this particular address (192.168.1.100) onto the VPN and not onto my local network.
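
    A hedged sketch of the non-GUI fix: a /32 host route is more specific than the /24 pointing at wlan0, so adding one for just that address sends only that host over the tunnel. This only helps if 192.168.1.100 really exists on the remote side of the VPN; with both sites using the same subnet there is no completely clean general answer.

        # classic route syntax (run as root)
        route add -host 192.168.1.100 dev ppp0
        # or the iproute2 equivalent
        ip route add 192.168.1.100/32 dev ppp0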

    Read the article

  • Consolidate SQL Server Reporting Services

    - by Eric C. Singer
    I've been a big fan of consolidating as many DBs as possible onto a few SQL servers for a while, and I've had great success with it. However, I've never had to deal with SQL Server Reporting Services. Has anyone migrated SSRS from a bunch of random SQL servers onto a consolidated SQL server? I don't know a whole lot about SSRS, which is part of the problem. To my knowledge it's one DB per SSRS instance, so it sounds like I'd need to find a way of exporting the data and merging it. Basically, the process used to look like: move the DB from SQL Express to the shared SQL server, then change the connection in the app to point at the new SQL server. With Reporting Services, how do I move the reporting component of the DB as well? I realize I may need to tweak the app, but my question is on the SQL side.
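
    For reference, a rough outline of the usual native-mode SSRS move, offered with hedging (paths and the password below are placeholders, and subscriptions, shared data sources and permissions deserve a test run): the report definitions live in the ReportServer database, but they are unusable on the new instance unless the symmetric encryption key comes along too.

        rem on the source server: back up ReportServer/ReportServerTempDB, then the key
        rskeymgmt -e -f C:\backup\rskey.snk -p SomePassword

        rem on the consolidated server: restore the databases, point SSRS at them in
        rem Reporting Services Configuration Manager, then apply the saved key
        rskeymgmt -a -f C:\backup\rskey.snk -p SomePassword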

    Read the article

  • Hosting DokuWiki on a Stick

    - by Rook
    I have a middling-sized DokuWiki on a Stick with project documentation that I've always used offline, since I was the only author writing documentation. Now the project is expanding somewhat, and I need a way and a place to host the wiki. Can anyone recommend where such a place might be found? (A few beginner questions will follow now.) Also, at this point I might mention that I chose DokuWiki because of its "just works" attitude and because I didn't want to learn server administration and the like. I'm interested in how "compatible" wiki systems are with each other. What do I mean by compatible? As far as I understand, DokuWiki saves its data as text files. If I wished to convert it to some other wiki system, would that be possible? Or to some other format more suitable for printing (at some point it will be necessary to convert a lot of the documentation into a manual for users)? All advice and constructive approaches are appreciated.

    Read the article

  • Choosing a web development framework?

    - by Bob
    So, I've reached a point where I want to start developing a website. Originally I planned to build it using PHP and CodeIgniter; I'm familiar with both, but, truth be told, I'm not too fond of either. I find they just get rather messy. CodeIgniter helps somewhat, but no matter what, it seems that most PHP comes out more obfuscated than it has to be. Anyway, I've come to the point where I want to use either Python or Ruby. I'm familiar with both, though more so with Python, but I've never done any web development in them. I'll take the necessary time to learn the frameworks (and further my knowledge of the language I choose), but I need to choose one. I don't like either language more than the other; they both have their benefits. However, since I've never done any web development with either language, I was hoping you could give me some pointers. What frameworks are available for each language? What do you recommend and why? Note: I've primarily looked into Rails and Django, but I'm still open to others. I'm looking for one that will work for just one (or maybe two) developers. It has to be fairly easy to learn (but I will take the time to learn it). Also, I'd like it to easily support clean code and agile development.

    Read the article

  • How can I create (or do I even need to create) an alias of a DNS MX record?

    - by AKWF
    I am in the process of moving my DNS records from Network Solutions to the Amazon Route 53 service. While I know and understand a little about the basic kinds of records, I am stumped on how to create the record that will point to the MX record on Network Solutions (if I'm even saying that right). On Network Solutions I have this (mail servers are listed in rank order):
        Domain:                       myapp.net
        MX mail server (preference):  inbound.myapp.net.netsolmail.net. (10)
        TTL:                          7200
        Description:                  Network Solutions E-mail
    I have read that the payload of an MX record must point to an existing A record in the DNS. Yet in the example above, that inbound.myapp... record only has the words "Network Solutions E-mail" next to it. Our email is hosted at Network Solutions. I have already created CNAME records that look like this:
        mail.myapp.net   7200   mail.mycarparts.net.netsolmail.net.
        smtp.myapp.net   7100   smtp.mycarparts.net.netsolmail.net.
    Since I am only using Amazon as the DNS, do I even need to do anything with that MX record? I appreciate your help; I googled and researched this before I posted. This is my first post on Webmasters, although I've been on SO for a few years.
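
    Assuming the mail really does stay hosted at Network Solutions, a hedged sketch of the Route 53 side: you recreate the MX record in your new zone with the same priority and mail host. The target only needs to resolve to an address somewhere in DNS; the A record for inbound.myapp.net.netsolmail.net lives in Network Solutions' netsolmail.net zone, so you do not add it yourself. In the Route 53 console this is a single MX record for myapp.net whose value is the priority followed by the host name.

        myapp.net.   7200   IN   MX   10 inbound.myapp.net.netsolmail.net.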

    Read the article

  • Wired network on computer to wifi

    - by user329592
    I just got myself a wifi-capable cell phone, but I don't have wireless internet at home. I do have an unlimited wired internet connection on my computer, and I wonder whether there is any gadget I can plug into my computer (maybe into a USB port?) that can turn my computer into a wifi access point. I mean a dongle or something that will let me connect my phone to my computer's internet over wi-fi. Also, I don't know anything about networking, so would it be hard for me to set up a secure wifi access point? Thank you for reading this question through. I hope I can buy some sort of adapter that is comparatively cheap.

    Read the article

  • Are Get-Set methods a violation of Encapsulation?

    - by Dipan Mehta
    In an object-oriented framework, one expects strict encapsulation: internal variables are not to be exposed to outside code. But in many codebases we see tons of get/set methods that essentially open a formal window onto internal variables whose modification was originally meant to be strictly prohibited. Isn't that a clear violation of encapsulation? How widespread is the practice, and what should be done about it? EDIT: I have seen discussions with two extreme opinions: on one hand, people believe that because the get/set interface is the mechanism used to modify a parameter, it does not qualify as violating encapsulation; on the other hand, there are people who believe it does. Here is my point. Take the case of a UDP server with the methods get_URL() and set_URL(). The URL to listen on is genuinely a parameter that the application needs to supply and modify. However, in the same class, properties like get_byte_buffer_length() and set_byte_buffer_length() clearly point to values that are internal. Doesn't that imply a violation of encapsulation? In fact, even get_byte_buffer_length(), which doesn't modify the object, still misses the point of encapsulation, because now there is an app out there that knows I have an internal buffer! Tomorrow, if the internal buffer is replaced by something like a *packet_list*, the method stops making sense. Is there a universal yes/no for get/set methods? Is there a strong guideline that tells programmers (especially junior ones) when they violate encapsulation and when they don't?
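
    A small illustration of the distinction the question is drawing, written as a hedged TypeScript sketch (the class and member names are invented for the example, not taken from any real framework): the listen address is part of the object's contract, while the buffer is a representation detail that is better hidden behind behaviour.

        class UdpServer {
          private url = "udp://0.0.0.0:5000";
          private buffer = new Uint8Array(4096);   // internal representation detail

          // part of the contract: callers legitimately choose where it listens
          get listenUrl(): string { return this.url; }
          set listenUrl(value: string) { this.url = value; /* re-bind socket here */ }

          // leaky: promises callers there will always be "a byte buffer" inside
          // get byteBufferLength(): number { return this.buffer.length; }

          // better: expose behaviour, so the buffer can become a packet_list later
          // without breaking any caller
          onDatagram(handler: (datagram: Uint8Array) => void): void {
            // receive into whatever internal storage we use, then call handler(...)
          }
        }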

    Read the article

  • Extreme Optimization – Curves (Function Mapping) Part 1

    - by JoshReuben
    Overview
    · A curve is a functional mapping between two factors (i.e. a function; however, the word "function" is a reserved word).
    · You can use the EO API to create common types of functions, find zeroes and calculate derivatives; currently supported are constants, lines, quadratic curves, polynomials and Chebyshev approximations.
    · A function basis is a set of functions that can be combined to form a particular class of functions.
    The Curve class
    · The abstract base class from which all other curve classes are derived. It provides the following methods:
    · ValueAt(Double) - evaluates the curve at a specific point.
    · SlopeAt(Double) - evaluates the derivative.
    · Integral(Double, Double) - evaluates the definite integral over a specified interval.
    · TangentAt(Double) - returns a Line curve that is the tangent to the curve at a specific point.
    · FindRoots() - attempts to find all the roots or zeroes of the curve.
    · A particular type of curve is defined by a Parameters property, of type ParameterCollection.
    The GeneralCurve class
    · Defines a curve whose value and, optionally, derivative and integrals are calculated using arbitrary methods. A general curve has no parameters.
    · Constructor parameters: RealFunction delegates - one for the function, and optionally another two for the derivative and integral.
    · If no derivative or integral function is supplied, they are calculated via the NumericalDifferentiation and AdaptiveIntegrator classes in the Extreme.Mathematics.Calculus namespace.
        // the function is 1/(1+x^2)
        private double f(double x) {
            return 1 / (1 + x*x);
        }
        // its derivative is -2x/(1+x^2)^2
        private double df(double x) {
            double y = 1 + x*x;
            return -2*x / (y*y);
        }
        // The integral of f is Arctan(x), which is available from the Math class.
        var c1 = new GeneralCurve(new RealFunction(f), new RealFunction(df), new RealFunction(System.Math.Atan));
        // Find the tangent to this curve at x=1 (the Line class is derived from Curve)
        Line l1 = c1.TangentAt(1);

    Read the article

  • Is meta description still relevant?

    - by Jeff Atwood
    I received this bit of advice about the meta description tag recently: Meta descriptions are used by Google probably 80% of the time for the snippet. They don’t help with rankings but you should probably use them. You could just auto generate them from the first part of the question. The description tag exists in the header, like so: <meta name="Description" content="A brief summary of the content on the page."> I'm not sure why we would need this field, as Google seems perfectly capable of showing the relevant search terms in context in the search result pages, like so (I searched for c# list performance): In other words, where would a meta description summary improve these results? We want the page to show context around the actual search hits, not a random summary we inserted! Google Webmaster Central has this advice: For some sites, like news media sources, generating an accurate and unique description for each page is easy: since each article is hand-written, it takes minimal effort to also add a one-sentence description. For larger database-driven sites, like product aggregators, hand-written descriptions are more difficult. In the latter case, though, programmatic generation of the descriptions can be appropriate and is encouraged -- just make sure that your descriptions are not "spammy." Good descriptions are human-readable and diverse, as we talked about in the first point above. The page-specific data we mentioned in the second point is a good candidate for programmatic generation. I'm struggling to think of any scenario when I would want the Google-generated summary, that is, actual context from the page for the search terms, to be replaced by a hard-coded meta description summary of the question itself.

    Read the article

  • Ubuntu: The Movie

    - by CYREX
    Since Ubuntu is the most popular distribution and has driven a lot of change in many places around the globe and in different industries, to the point where even people who do not know what Linux is still know what Ubuntu is (go figure), there might be a movie coming someday (like The Social Network for Facebook, or Revolution OS for Linux/Red Hat), and I wanted to know how it all came to be from the actual players in the show. UBUNTU: The Movie. Since I have seen several of the primary characters of that movie here, this might be a good place to start on how it all came to be. Not in the traditional Wikipedia way or the Ubuntu help section way, but from what the actual developers have in mind about how it all came together to the point of having a huge number of users and an incredible level of sophistication in the forums, help sections, installers, etc. This is just to have the know-how before the actual movie comes out some day in the future. As a fan of Ubuntu, this is a must-know! ;) Hope I made some people happy and some others shy, hehe.

    Read the article

  • Perl not working with Nginx via fastcgi, cannot decipher error logs

    - by ProfessionalAmateur
    I'm running CentOS 6.2 and Nginx 1.2.3, following these Linode instructions to get Perl to work with Nginx. I've done everything up to the point of testing an actual Perl file. When I do, the browser says: The page you are looking for is temporarily unavailable. Please try again later. And my Nginx error log shows the following:
        2012/09/02 22:09:58 [error] 20772#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.102, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:8999", host: "192.168.1.10:81"
    I'm stuck at this point. I'm not sure if it matters, but I also have spawn-fcgi and php-fpm serving PHP files on this site; that should be 100% separate from the perl-fastcgi setup (different port, etc.). How can I troubleshoot this?
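
    A hedged reading of that log line: "111: Connection refused" on fastcgi://127.0.0.1:8999 means nginx itself is fine but nothing is listening on port 8999, which in the Linode guide is the job of the Perl FastCGI wrapper script. So the first things to check are whether that wrapper process is actually running (and comes back after a reboot) and whether the nginx location block hands .pl requests to the same port, roughly like the sketch below.

        # is anything listening where nginx expects it?
        netstat -tlnp | grep 8999

        # nginx side (sketch): pass .pl scripts to the FastCGI wrapper on 8999
        location ~ \.pl$ {
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:8999;
            fastcgi_index index.pl;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }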

    Read the article

  • What to think about when designing a simple GUI for a quiz game

    - by PeterK
    I am getting close to finishing my first iPhone game ever (in fact my first programming experience ever), which is a quiz game. I have all the functionality I want and am currently polishing it, both from a code point of view and by looking at the GUI. My initial idea was not to use any specific graphics but rather to focus on the game experience and simplicity, using only a background color (orange) with white text and buttons. The design is based on the idea that all ages, from children just learning to read, should be able to host and play this game. However, as I am now getting close to the finish line, I am starting to think about what is needed from a GUI point of view. I would like to ask for some advice on what to think about when designing a GUI. Is it considered OK to ship without any 'fancy' graphics, and what is the risk of doing without them? Also, what colors go well together if I choose a simple GUI? I am thinking about color blindness, etc. In other words, how do I design a good and effective GUI for a simple game like mine? Thanks

    Read the article

  • What causes bad performance in consumer apps?

    - by Crashworks
    My Comcast DVR takes at least three seconds to respond to every remote control keypress, making the simple task of watching television into a frustrating button-mashing experience. My iPhone takes at least fifteen seconds to display text messages and crashes ¼ of the times I try to bring up the iPad app; simply receiving and reading an email often takes well over a minute. Even the navcom in my car has mushy and unresponsive controls, often swallowing successive inputs if I make them less than a few seconds apart. These are all fixed-hardware end-consumer appliances for which usability should be paramount, and yet they all fail at basic responsiveness and latency. Their software is just too slow. What's behind this? Is it a technical problem, or a social one? Who or what is responsible? Is it because these were all written in managed, garbage-collected languages rather than native code? Is it the individual programmers who wrote the software for these devices? In all of these cases the app developers knew exactly what hardware platform they were targeting and what its capabilities were; did they not take it into account? Is it the guy who goes around repeating "optimization is the root of all evil," did he lead them astray? Was it a mentality of "oh it's just an additional 100ms" each time until all those milliseconds add up to minutes? Is it my fault, for having bought these products in the first place? This is a subjective question, with no single answer, but I'm often frustrated to see so many answers here saying "oh, don't worry about code speed, performance doesn't matter" when clearly at some point it does matter for the end-user who gets stuck with a slow, unresponsive, awful experience. So, at what point did things go wrong for these products? What can we as programmers do to avoid inflicting this pain on our own customers?

    Read the article

  • Nameservers: one domain, one VPS and one VMCloud

    - by Dave
    I 'had' just a single VPS with the nameservers ns1 & ns2.mydomain.com. I've now taken a VMCloud package as I need more space, etc. Ultimately I will carefully transfer all accounts from my VPS to the Cloud, but in the short term I'm running both. In the new WHM on the Cloud I have set the hostname (bla.domain) and the nameservers as ns3 & ns4.mydomain.com with the two new IP addresses. The question is: do I need to do anything else, e.g. where mydomain.com is registered? I want ns1 & ns2 to point to the VPS and ns3 & ns4 to point to the Cloud.
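
    A hedged sketch of the usual remaining steps: ns3 and ns4 have to exist in two places, as A records in the mydomain.com zone and as registered host (glue) records at the registrar where mydomain.com is managed, pointing at the new cloud IPs (the addresses below are placeholders). After that, any domain whose zone lives on the Cloud box gets delegated to ns3/ns4, while domains still on the VPS keep ns1/ns2.

        ns3.mydomain.com.   14400   IN   A   203.0.113.21
        ns4.mydomain.com.   14400   IN   A   203.0.113.22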

    Read the article

  • Problem installing Ubuntu 10.04 64 bit side by side with Vista by using a bootable USB drive. What now?

    - by Adam Siddhi
    What happened: I decided to install Ubuntu 10.04 64 bit side by side with Vista Home Premium (I guess on another partition) using a USB stick. I found instructions on how to do this here: https://help.ubuntu.com/community/Installation/FromUSBStick To create the bootable USB drive I had to download a program called Unetbootin. That process was simple enough: all I had to do was choose the disk image option, select the ubuntu-10.04-desktop-amd64.iso image, make sure it recognized my USB drive, and press OK. It takes only a few minutes to create a working bootable USB drive. Then I had to restart my computer, enter the BIOS, select my USB drive as the first boot drive, save the options and continue booting. After this, Ubuntu actually loads up; I think this is known as the Live version of Ubuntu, so you can try it out before fully installing it. Anyway, on the Ubuntu 10.04 desktop I saw an installer. I clicked it and began the installation process. Just so you know, I tried installing it twice, and I will explain what happened each time.
    The first time I tried installing Ubuntu 10.04 I got stuck at step 4 of 7. I remember selecting the last option in the window, which was Specify Partitions Manually (Advanced). I made my partition for Ubuntu around 52 GB. I clicked Forward and a little pop-up window appeared saying Please Wait. The installation process stalled on this window, so I closed it and quit the installation. At this point I was worried, because I had already selected the partition size and assumed it had started creating the partition; since it stalled, though, I had to quit.
    The second time, I once again reached step 4 of 7 and decided to select the first option, which is Install them side by side, choosing between them each startup. I figured this was the safe way to go. I did that and the Please Wait pop-up appeared again, but lasted only about 10 seconds. Then I got to (I guess) step 6, where it asks you to enter your desired name and password. I did that and clicked Forward. The Ubuntu 10.04 installation loading screen appeared and the progress bar at the bottom started filling up. It got to 83% and stalled during the Importing other profile information step (I think that is what it was called; I had chosen that option during, I think, step 6). At this point I decided to stop the installation process. I was getting very nervous. I tried to restart the computer, but all that happened was that Ubuntu restarted. I finally got the computer to restart. I was pretty sure I had screwed something up big time by this point. As my computer was restarting I entered the BIOS again, switched back to booting from my main hard drive containing Vista, saved, and continued the boot process.
    My worst fears were confirmed: Vista would not boot. I saw the little Microsoft Windows choppy animated green loading bar at the bottom of the screen and then, boom, it decided to restart. When it restarted I had the option to run a memory test to see if there was anything that needed to be repaired. That took about 20 minutes, and at the end I saw that I did indeed have to repair something. I had to go through 2 repair processes, restarting the computer after each one. The second time it went through the repair process it said that it could not fully repair the damage. I was scared and restarted, but Vista did load up.
    I got to my desktop and saw a message saying something like "Repairs have been made, please restart for changes to take effect". I noticed that some notification icons were missing and I could not hear the volume in a video; things were a bit funky. So I restarted, and here I am.
    Now what?! Since I got back into Vista and thankfully have a working Internet connection, I am trying to find answers to my problem (which is why I am writing this post). I am afraid that I have partitioned my hard drive 2 times, after researching installing Ubuntu 10.04 and seeing this post: http://techie-buzz.com/foss/ubuntu-10-04-lts-installation-guide.html The author shows screenshots of installing Ubuntu 10.04. He shows the image of step 4 of 7 with a caption at the bottom, which I will recreate below: "Select a partitioning option. Unless you want to format all the hard drive and install Ubuntu afresh, select the last option and proceed."
    Questions: If I have indeed partitioned my HD 2 times (which I am sure I have), how do I get to a point where I can see all my bad, unfinished Ubuntu partitions and get rid of them? How do I clean this big mess up? And how can I ensure that this mess will not happen next time I try installing Ubuntu 10.04? Thank you, Adam

    Read the article
