Search Results

Search found 17972 results on 719 pages for 'always on'.


  • Is the Joel Test really a good gauging tool?

    - by henry
    I just learned about the Joel Test. I have been a computer programmer for 22 years, but somehow never heard of it before. I consider my best job so far to be a small investment-management company with 30 employees and only 3 people in the IT department. I am no longer with them, but I worked there for 5 years – my longest streak with any given company. To my surprise they scored extremely poorly on the Joel Test. The only two questions I would answer "yes" to are #4 (Do you have a bug database?) and #9 (Do you use the best tools money can buy?). Everything else is either "sometimes" or a straight "no". Here is what I liked about the company, however:
    a) Good pay. They bragged about it to my face and I bragged about it to their face, so it was almost like a family environment.
    b) I always knew the big picture. When writing code to solve a particular problem, there was no ambiguity about the business nature of that problem. Even though we did not always have written specs, we could ask the business users a question anytime, often yelling it across the floor. I could even talk to executives any time I felt like it: no appointment necessary.
    c) Immediate feedback. Once we implemented a solution and made the business users happy, they immediately let us know, and we (the programmers) became the heroes of the moment.
    d) No red tape. I could always buy any tools I deemed necessary and design solutions the way my professional judgment dictated.
    e) Flexibility. If I had a mid-day dental appointment near my house rather than near the office, I would send an email to the company: "FYI: I'm working from home today". As long as one of the 3 IT guys was on the floor (to help the traders in case their monitors went dark), they did not care where the other 2 were.
    So the question becomes: how valuable is the Joel Test? Why bother with it?

    Read the article

  • How to become an expert web-developer?

    - by John Smith
    I am currently a junior PHP developer and I really LOVE it. I have loved the internet since the first time I got onto it; I have always loved smartly-built websites, always wondered how it all works, and always admired sites with good design and rich functionality. Now I am finally creating websites on my own and it feels really great. My goals are to become an expert web developer (aiming at creating websites for small and medium businesses, not enterprise-sized systems), to have a great full-time job, to do freelance work, and to create my own startup in the future. General question: What do I do to become an expert, professional and in-demand web programmer? More concrete questions:
    1) How do I choose the languages and technologies I need? I know that every web developer must know HTML+CSS+JS+AJAX+jQuery, and I am doing some design as well because I like it and I need it for freelancing. But what about backend languages? Currently I picked PHP because it's the most in demand in my area and most of the web uses it, but what happens in the future? Say, in 3 years I am good at PHP and the PHP frameworks, but what if some other language becomes the most popular by then? Do I switch to it? I know that being a good programmer is not about languages and frameworks but about the ability to learn and to reach goals, but I still think that learning the frameworks for a language can take quite some time. Am I wrong?
    2) In general, what are the basic guidelines for becoming an expert web developer? What are the most important things I should focus on? Thank you!

    Read the article

  • Access losing db connections

    - by Dwight T
    I have a weird problem going on at work. People have been using MS Access to connect to a SQL Server database, and lately they are getting sporadic problems connecting to the servers. It's not always the same users and it's not always a problem, which makes it a real pain to try to solve. One example of a related problem: a person has a linked table and would filter it, or write a query against it, to return rows where itemsku = 'ABCD1234'. It would return one record, but with ItemSku LMKN7486, and it would consistently return the same wrong record, so itemsku ABCD1234 always returned LMNK7486. One would think it might be a driver problem, but it could also be a user problem. Just posting the question to see if anyone else has had similar scenarios. Thanks

    Read the article

  • Huawei E3276 LTE uplink slow on the routing Ubuntu machine, but not for other devices in the LAN

    - by Mytomi
    I have a Huawei E3276 LTE dongle (12d1:14fe - 12d1:1506) and a problem with the upstream speed. The problem is present not only with Ubuntu 14.04 LTS (64-bit workstation, kernel 3.16), but also with Raspbian Jessie on a Raspberry Pi (kernel 3.14). Upstream always seems to be limited to 5 Mbit/s whenever I check the speed from the Linux computer that I use as the LTE router. The other computers in the LAN always get about 10-15 Mbit/s upstream, even though their traffic is routed through the same Linux computer that suffers from the seemingly capped uplink. Downstream speed is always fine, 25 Mbit/s. I even installed Windows 7 on the same computer as Ubuntu and the speeds are 25 Mbit/s down, 15 Mbit/s up. So the problem is not with the E3276 device itself or with the mobile subscription, but with the Huawei E3276's Linux compatibility. Maybe something in the kernel? I have made sure that the problem is not with my iptables rules: the speed does not noticeably increase when iptables is disabled. Turning off IPv4 forwarding does not improve the speed either. I'm not sure which settings and logs would help in debugging this. Please ask for more details if you have a clue what might be wrong. Thanks, Mytomi
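
    As a sketch only of the checks described above (these exact commands are not from the post, and interface names and rule sets will differ), the forwarding and firewall state on the routing machine can be inspected like this:
        # Is IPv4 forwarding currently enabled on the router?
        sysctl net.ipv4.ip_forward
        # Inspect the current firewall rules and their packet counters
        sudo iptables -L -n -v
        # Temporarily allow all forwarding and flush the rules (restore them afterwards),
        # then re-test upstream speed from the router itself and from a LAN client
        sudo iptables -P FORWARD ACCEPT && sudo iptables -F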

    Read the article

  • How to join video files from terminal?

    - by Leon Vitanos
    I have tried avidemux2_cli, mencoder, ffmpeg, and cat, but none of them always works (most of the time the error is that the audio codecs are not the same). Maybe I am passing the wrong options to the commands. The commands:
    cat Sample.avi rrr.avi > complete.avi
    ffmpeg -i Sample.avi -i output.avi -vcodec copy -acodec copy complete.avi
    mencoder -ovc lavc -oac copy Sample.avi rrr.avi -o complete.avi
    avidemux2_cli --audio-codec copy --video-codec copy --output-format avi --load Sample.avi -append output.avi --save video.avi
    The problem with cat is that it shows no error but doesn't always work: complete.avi ends up exactly the same as Sample.avi. ffmpeg does nothing; complete.avi is again the same as Sample.avi. The mencoder error is "All files must have identical audio codec and format for -oac copy", so complete.avi is the same as Sample.avi. With avidemux2_cli there is no error, but complete.avi is again the same as Sample.avi. So to sum up, every complete.avi comes out identical to Sample.avi, and the problem seems to be that the files don't have the same audio codec (I guess). Any ideas?
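
    One possible workaround, given as a sketch only: since the inputs apparently use different audio codecs, stream-copy concatenation cannot work, but ffmpeg's concat filter can re-encode both streams while joining them. The file names match the question; the output codecs (mpeg4 video, MP3 audio) are only example choices, and the clips are assumed to share the same resolution and frame rate:
        # Inspect both inputs first to confirm that the audio codecs really differ
        ffmpeg -i Sample.avi
        ffmpeg -i rrr.avi
        # Join the two clips, re-encoding video and audio so mismatched codecs no longer matter
        ffmpeg -i Sample.avi -i rrr.avi \
          -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
          -map "[v]" -map "[a]" -c:v mpeg4 -q:v 4 -c:a libmp3lame complete.avi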

    Read the article

  • Warnings When Undo Isn't Possible

    - by ultan o'broin
    Enjoyed this post, Never Use a Warning When You Mean Undo, by Aza Raskin. It makes sense never to warn users if an undo option is possible. The examples given are from the web space. Here's the conclusion: Warnings cause us to lose our work, to mistrust our computers, and to blame ourselves. A simple but foolproof design methodology solves the problem: "Never use a warning when you mean undo." And when a user is deleting their work, you always mean undo. However, in enterprise apps you may find that an undo option isn't technically possible or desirable. Objects may be shared or be part of a flow elsewhere, and undoing something already committed to the database (a rollback, I guess) may not be feasible if it has become locked by another process. Plus, what constitutes user ownership of objects isn't always clear to users. The implications of delete (and other) actions need to be clearly communicated in advance. Really, warnings are important in the enterprise space. Data has a very high value, and users can perform a wide variety of actions that may put that data at risk, not always within the application itself (at the browser level, for example). That said, throwing warnings all over the place when an undo option is possible is annoying. Instead, treat warnings with respect. When no undo option is possible, use warning messages to communicate potentially dangerous or irrecoverable actions, or the downstream consequences of user actions on the process or task flow. Force the user to respond to a warning message by using a modal dialog with clearly labeled action buttons. A great article that got me thinking. Let's see more like that. Let's not forget there are more types of messages than just error messages. User assistance and user experience professionals need to understand when best to use confirmation, information, and warning types too!

    Read the article

  • SQL SERVER – Using MAXDOP 1 for Single Processor Query – SQL in Sixty Seconds #008 – Video

    - by pinaldave
    Today’s SQL in Sixty Seconds video is inspired by my presentation at TechEd India 2012 on Speed up! – Parallel Processes and Unparalleled Performance. There are always special cases when it comes to SQL Server. There are always a few queries that give optimal performance when executed on a single processor, and there are always queries that give optimal performance when executed on multiple processors. I will be presenting how to identify such queries, as well as the best practices related to them. In this quick video I am going to demonstrate how, if a query gives optimal performance when running on a single CPU, one can restrict it to a single CPU by using the hint OPTION (MAXDOP 1). More on Errors: Difference Temp Table and Table Variable – Effect of Transaction; Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT; Debate – Table Variables vs Temporary Tables – Quiz – Puzzle – 13 of 31. I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Video
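
    As a quick illustration of the hint (an example query against the AdventureWorks sample tables, not the script from the video), OPTION (MAXDOP 1) is simply appended to the statement it should affect:
        -- Hypothetical example: force this query to run on a single CPU
        SELECT soh.SalesOrderID, SUM(sod.LineTotal) AS OrderTotal
        FROM Sales.SalesOrderHeader AS soh
        INNER JOIN Sales.SalesOrderDetail AS sod
            ON sod.SalesOrderID = soh.SalesOrderID
        GROUP BY soh.SalesOrderID
        OPTION (MAXDOP 1);  -- limits the degree of parallelism for this query only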

    Read the article

  • Are Gadgets on Windows 7 buggy at the moment?

    - by Jian Lin
    Are Gadgets on Windows 7 buggy at the moment? For example, I made the CPU Meter and Clock gadgets "always on top", but sometimes one is on top and the other is not. I have to set them to "not always on top" and then "always on top" again, and then they show up on top again. Another thing that may or may not be a bug: even when viewing a video in full-screen mode, the gadgets are all still there. This may be one case where the user won't want all the gadgets on top, namely when viewing a video full screen.

    Read the article

  • In Scrum, should you split the backlog into a functional backlog and a technical backlog, or not?

    - by Patrick
    In our Scrum teams we use a backlog, which mostly contains functional topics but sometimes also contains technical topics. The advantage of having one backlog is that it becomes easy to choose the topics for the next sprint, but I have some questions. First, to me it seems more logical to have a separate technical backlog, where developers themselves can add purely technical items, like: we could improve performance in this method, this class lacks some technical documentation, ... With one backlog, all developers always have to go via the product owner to get their topics added to the backlog, which seems like additional, unnecessary work for the product owner. Second, if you have a product owner who focuses only on the purely functional items, the purely technical items (like missing technical documentation, code that erodes and should be refactored, classes that always cause problems during debugging because they don't have a stable foundation and should be refactored, ...) always end up at the end of the list because "they don't serve the customer directly". With a separate technical backlog, and time reserved in every sprint for these purely technical items, we can improve the applications functionally but also keep them healthy inside. What is the best approach? One backlog or two?

    Read the article

  • NDC Oslo

    - by Alan Smith
    Originally posted on: http://geekswithblogs.net/asmith/archive/2013/06/14/153136.aspx
    2013 has been a hectic year for conference presentations so far. NDC in Oslo was the 6th conference I have attended, and my session there was my 11th conference presentation this year. I have been meaning to make the short trip over from Stockholm to NDC for a few years, and this was the first time I made it. I had heard a lot of great things about the event, and was impressed with the location, the sessions, and most of all the atmosphere around the event booths and during the party on Thursday evening. The session I was delivering was my “Grid Computing with 256 Windows Azure Worker Roles & Kinect” demo, which I have delivered at many events over the past 12 months. The demo went fine. I’m always a little nervous when I try to scale the application out to 256 worker roles; it almost always works well and the application will scale in minutes, but very occasionally there can be a longer delay due to the provisioning process in the Windows Azure data centers. This would not be an issue for many scenarios, but when standing on stage in front of a room full of developers you really want things to run smoothly. A number of people have suggested that I should pre-provision an environment so that it is guaranteed to be there when I run the demo during a session. For me the aim has always been to show the rapid scalability of cloud-based platforms live on stage. Pre-provisioning an environment may make for a more reliable demo, but to me that would be cheating, and not half as much fun!

    Read the article

  • How to host a scalable social networking app

    - by christopher-mccann
    I am in the middle of developing a social networking application for a very select user niche which could scale to a few million users. So far I have always hosted applications on Rackspace Cloud, and I have no issues with them at all; it has always been a really good service and I have never had any downtime. My question, though, is: does anyone think that cloud computing is not the way to host scalable web apps? Or can anyone with experience of this recommend a better solution? I have always shunned trying to run big servers from my own facilities, as it seems silly to go to the expense of bringing in big alternative power supplies and all the other necessary precautions when other companies already do this. I looked at managed hosting services, but that proved to be a bit too expensive for us at the start, and the scalability wasn't good enough; it would take a day or two to get a new server provisioned. Therefore I ended up on a cloud platform. If anyone has any recommendations or advice it would be greatly appreciated.

    Read the article

  • Working with a company as a Junior Developer [closed]

    - by user1601973
    We have all started our careers in some way or other. I am a college student based in North America and I am doing my second internship with the same company I did my first internship with. I came back because the people here were always helpful and supportive. But something happened today that I wanted to share on SO. Since I started this internship I have been doing documentation and that kind of stuff only, whereas in my first internship I actually worked along with the developers and learned so many things. I was in a conversation with my team lead, and he asked me whether I had completed a particular piece of work or not. That work had slipped my mind. He was indeed kind of pissed, and said "You don't have to worry about it, I will figure it out". I felt so bad I was about to literally cry. I stopped my lunch and went on to complete that work. I always ask for work in the office, and I always try to be an asset to whoever I am working for, but this is the first time something like this has happened. What are your thoughts on this, and should I apologise or not? I think I should.

    Read the article

  • change default userid for connecting to local AFP share?

    - by Stew
    I've got Netatalk & Avahi running on a local Ubuntu server--I use two different userids, "afp" for Time Machine and "stew" to access my media files etc. In order to mount a shared directory on my server, I have to click "Connect As..." and enter my userid/password every time, because it always tries to log in using Time Machine's userid. I'm not sure if this is because that userid is set as default, or just because it's the last userid that logged in to that server--either way: Is there a way to change the default userid for connecting to a given server? Mega extra credit: I'd love to have this automated, such that my userid, "stew", is always logged in (and heck, it'd be great to have the directories always mounted, too!) whenever the server is available. Thanks!

    Read the article

  • How to automatically render all opaque meshes with a specific shader?

    - by dsilva.vinicius
    I have a specular outline shader that I want to be used on all opaque meshes of the scene whenever a specific camera renders. The shader works properly when it is manually applied to a material. The shader is as follows:
    Shader "Custom/Outline" {
        Properties {
            _Color ("Main Color", Color) = (.5,.5,.5,1)
            _OutlineColor ("Outline Color", Color) = (1,0.5,0,1)
            _Outline ("Outline width", Range (0.0, 0.1)) = .05
            _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1)
            _Shininess ("Shininess", Range (0.03, 1)) = 0.078125
            _MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {}
        }
        SubShader {
            Tags { "Queue"="Overlay" "RenderType"="Opaque" }
            Pass {
                Name "OUTLINE"
                Tags { "LightMode" = "Always" }
                Cull Off
                ZWrite Off
                // Uncomment to show outline always.
                //ZTest Always
                CGPROGRAM
                #pragma target 3.0
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"
                struct appdata {
                    float4 vertex : POSITION;
                    float3 normal : NORMAL;
                };
                struct v2f {
                    float4 pos : POSITION;
                    float4 color : COLOR;
                };
                float _Outline;
                float4 _OutlineColor;
                v2f vert(appdata v) {
                    // just make a copy of incoming vertex data but scaled according to normal direction
                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                    float3 norm = mul ((float3x3)UNITY_MATRIX_IT_MV, v.normal);
                    float2 offset = TransformViewToProjection(norm.xy);
                    o.pos.xy += offset * o.pos.z * _Outline;
                    o.color = _OutlineColor;
                    return o;
                }
                float4 frag(v2f fromVert) : COLOR {
                    return fromVert.color;
                }
                ENDCG
            }
            UsePass "Specular/FORWARD"
        }
        FallBack "Specular"
    }
    The camera used for the effect has just a script component which sets up the shader replacement:
    using UnityEngine;
    using System.Collections;
    public class DetectiveEffect : MonoBehaviour {
        public Shader EffectShader;
        // Use this for initialization
        void Start () {
            this.camera.SetReplacementShader(EffectShader, "RenderType=Opaque");
        }
        // Update is called once per frame
        void Update () {
        }
    }
    Unfortunately, whenever I use this camera I just see the background color. Any ideas?

    Read the article

  • Prevent chrome from auto-hiding bookmark bar on google search

    - by Jeromy Anglim
    I am using Chrome 28.0.1500.20 beta on OS X 10.7. In this version, when you perform a Google search using the address bar, the bookmarks bar disappears. Toggling "always show bookmarks bar" does not restore the bookmarks bar (see the screenshot below). I want to always show the bookmarks bar. In particular, I often use a bookmarklet that converts the current Google search into a Google Scholar search. Is there a way to always show the bookmarks bar, even when performing a Google search in Chrome?

    Read the article

  • Programming habits, patterns, and standards that have developed out of appeal to tradition/by mistake? [closed]

    - by user828584
    Being self-taught, the vast majority of what I know about programming has come from reading other people's code on websites like this one. I'm starting to wonder if I've developed bad or otherwise pointless habits from other people, or even just made invalid assumptions. For example, in JavaScript, void 0 is used in a lot of places, and until I saw this I just assumed it was necessary and that the 0 had some significance. Also, the HTTP header Referer is misspelled, but it hasn't been changed because doing so would break a lot of applications. This is also mentioned in Code Complete 2: The architecture should describe the motivations for all major decisions. Be wary of "we've always done it that way" justifications. One story goes that Beth wanted to cook a pot roast according to an award-winning pot roast recipe handed down in her husband's family. Her husband, Abdul, said that his mother had taught him to sprinkle it with salt and pepper, cut both ends off, put it in the pan, cover it, and cook it. Beth asked, "Why do you cut both ends off?" Abdul said, "I don't know. I've always done it that way. Let me ask my mother." He called her, and she said, "I don't know. I've always done it that way. Let me ask your grandmother." She called his grandmother, who said, "I don't know why you do it that way. I did it that way because it was too big to fit in my pan." What are some other examples of this?
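
    A small aside on the example above (not part of the original question): in JavaScript the void operator evaluates its operand and always returns undefined, so the 0 has no significance at all; it was traditionally used because the global undefined could be reassigned in very old engines:
        // void evaluates its operand and always yields undefined
        console.log(void 0);                // undefined
        console.log(void "anything else");  // still undefined
        console.log(void 0 === undefined);  // true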

    Read the article

  • Am I the only one this anal / obsessive about code? [closed]

    - by Chris
    While writing a shared lock class for SQL Server for a web app tonight, I found myself writing in the code style below, as I always do:
    private bool acquired;
    private bool disposed;
    private TimeSpan timeout;
    private string connectionString;
    private Guid instance = Guid.NewGuid();
    private Thread autoRenewThread;
    Basically, whenever I'm declaring a group of variables, writing a SQL statement, or doing any coding activity involving multiple related lines, I always try to arrange them where possible so that they form a bell curve (imagine rotating the text 90 degrees counter-clockwise). As an example of something that peeves the hell out of me, consider the following alternative:
    private bool acquired;
    private bool disposed;
    private string connectionString;
    private Thread autoRenewThread;
    private Guid instance = Guid.NewGuid();
    private TimeSpan timeout;
    In the above example, the declarations are grouped (arbitrarily) so that the primitive types appear at the top. When viewing the code in Visual Studio, primitive types are a different color than non-primitives, so the grouping makes sense visually, if for no other reason. But I don't like it because the right margin is less of an aesthetic curve. I've always chalked this up to being OCD or something, but at least in my mind, the code is "prettier". Am I the only one?

    Read the article

  • Which of these URL scenarios is best for big link menus? [seo /user friendly urls]

    - by Sam
    Hi folks, a question about URLs... A good friend of mine and I are exploring the possibilities of the three scenarios below for a website where each webpage has a menu system with about 130 links:
    SCENARIO 1: the page's menu system has SHORT non-descriptive hyperlinks as well as a SHORT canonical: <a href="design">dutch design</a> and the page's canonical URL points to e.g. "design"
    SCENARIO 2: the page's menu system has SHORT non-descriptive hyperlinks with LONG canonical URLs: <a href="design">dutch design</a> and the page's canonical URL points to "dutch-design-crazy-yes-but-always-honest"
    SCENARIO 3: the page's menu system has LONG descriptive hyperlinks with LONG canonical URLs: <a href="dutch-design-crazy-yes-but-always-honest">dutch design</a> and the page's canonical URL points to "dutch-design-crazy-yes-but-always-honest"
    Currently we have scenario 2... should we progress to scenario 3? All three work fine and point via mod_rewrite to the same page, which is fetched behind the scenes. Now, my question is which of these is better in terms of:
    user-friendliness (page loading times, whether the full URL is visible in the URL bar or not)
    SEO friendliness (proper indexing because the URLs contain descriptive, relevant tags)
    other concerns we forgot, like possible penalties for so many words in link hrefs?
    Thanks very much for your suggestions: much appreciated!

    Read the article

  • How to draw the line between quality and time?

    - by m3th0dman
    On one hand, I have been taught by various software engineering books ([1] as an example) that my job as a programmer is to make the best possible software: great design, flexibility, easy maintenance, and so on. On the other hand, although I realize that I actually write software for money and not for entertainment, and although it is very nice to write good code, plan ahead, and refactor after writing, I wonder whether that is always best for the business (after all, we should be responsible). Does the business always benefit from the best code? Maybe I'm over-engineering something, and it's not always useful? So how should I know when to stop in the process of achieving the best possible code? I am sure that experience makes a difference here, but I believe it cannot be the only answer. [1] Uncle Bob, in Clean Code, says on page 6: They [managers] may defend the schedule and requirements with passion; but that’s their job. It’s your job to defend the code with equal passion.

    Read the article

  • Remote desktop fails with user denied until client reboot

    - by Andrew J. Brehm
    Sometimes (but often enough to annoy) Remote Desktop Connection cannot connect to a server (2008 R2, but maybe also 2003) and claims that "The connection was denied because the user account is not authorized for remote login." The user is always authorized for remote login, and the connection works from other clients. (Although this is the very same message that appears when a user really isn't authorized for remote login.) The problem always goes away after a client restart. The client is always Windows 7, but I have no (other) reason to assume that it only affects Windows 7 clients. Any idea what causes this?

    Read the article

  • What do motherboard RAM slot colors mean?

    - by totymedli
    I have always seen that motherboard RAM slots are colored in pairs, but I never knew what that means. I just put the 2 RAM modules in, and after a few tries it always worked. But when I tried to install a third one, it always threw a blue screen of death. Is there an order in which I should install RAM on the board? What do the colors mean? Do they indicate a performance-boost opportunity, or are they just a guide for installation?

    Read the article

  • Best way: restructure an existing Team Foundation Server (TFS) solution

    - by dhh
    In my department we are developing several smaller add-ons for a unified communication server. For versioning and distributed development we use a Team Foundation Server 2012. But: there is only one large TFS solution for all of our applications and libraries:
    Main Solution
      Applications
        App 1
        App 2
        App 3
      Externals
      Libraries
        Lib 1
        Lib 2
      Tools
    The "Applications" path contains all main applications. Those do not depend on each other, but they depend on the Libraries and Externals projects. The "Externals" path contains some external DLLs referenced in our Applications and Libraries. The "Libraries" path contains commonly used libs (UI templates, helper classes, etc.). They do not depend on each other and they are referenced in the Libraries and the Tools projects. The "Tools" path contains some helper programs like setup helpers, update web services, etc. Now, there are some major points why I'd like to change this structure:
    - We can't use server builds.
    - It's uncomfortable to manage TFS scrum management (sprints, impediments, etc.) with a solution structure like that.
    - Every developer always has access to all projects in the solution.
    - A complete build takes too long if one accidentally hits [F6] in Visual Studio...
    What would you change in this solution? How would you break these projects into smaller solutions, and how should those solutions be structured? My first approach would be to create one TFS project for each Application, Library, and Tool. But how can I ensure that, e.g., App 2 always contains the newest version of Lib 1? Do I have to monitor changes on Lib 1 and update App 2 manually as soon as the Lib changes? Or can I somehow force Visual Studio to always use the newest version of an external project?

    Read the article

  • How well do graduates from top universities perform, and how do they compare to the rest of the world?

    - by Amumu
    I have always been impressed by those who get admitted to top universities like MIT or Stanford to study engineering. I don't actually know what they do at university or what they will do afterwards, but I always feel they can perform higher-level tasks with more complexity. I always think that they are good at creating and applying mathematical models in real life, and I tend to agree: if you can't apply math, it's your problem, not math's. I am a junior software engineer working on embedded devices. I am learning more about the Linux kernel and low-level stuff. Even so, my will is not strong enough to pursue the technical path forever; my final goal is to create something significant on my own. Do I have a chance to get to their level if I keep learning through experience and self-study? In my opinion, math is the must-have requirement, since the programs at those universities seem very math-oriented. Without very strong math skills, how can one perform well in science and research beyond making regular business products?

    Read the article

  • Moving email to folders in Outlook 2007

    - by Fred OBrien
    When I right-click an email in Outlook 2007, it always defaults to saving it in one particular folder. It's always the same folder, actually a subfolder within a folder. It never used to do this; it would always default to the last folder that I had saved to before. Does anyone know how to stop this from happening? It is getting really time-consuming and irritating. Would a Windows update affect this?

    Read the article

  • Multiple Issues - USB booting Ubuntu 12.04

    - by Pixelishus
    I've been using Ubuntu 12.04 off a bootable USB stick so I can have a portable OS, and also some OS variety. I've been using it for a while and I still have a few issues. First and most annoying, it often lags or freezes; I'm not sure which word fits best. Windows will often stop working and go dark/not responding for anywhere between 10 seconds and a full minute, and on some occasions even longer. However, the window eventually starts working again. Similarly, sometimes the whole system will freeze, not just one window. The mouse will still move, but nothing will work: no clicking, no menus, no keyboard shortcuts. Again, it will usually start working again. I'm liking Ubuntu a lot, but those issues can make it annoying to use sometimes. For example, it will ALWAYS freeze at some point if I try to watch a YouTube video, and I'll have to wait for a minute or so until it starts responding again. Aside from the lag/freezing, any time I download packages it always says "package operation failed" when it's done, though they do seem to download and install. Another issue I'm having is with shutting down. If I open the logout/shutdown menu and click shutdown, it just logs me out and takes me to the login screen. If I try shutting down through the login screen, it won't do anything, as if I hadn't even clicked it. I've been using the terminal to reboot or shut down when I need to. I've looked around for answers to all of these problems and have yet to find a solution that works. Are these just normal issues with USB booting? I haven't installed Ubuntu on any computers; I've always done USB booting.

    Read the article
