Search Results

Search found 8381 results on 336 pages for 'bad neighbor'.

Page 175 of 336

  • Error while trying to play .mp4 file

    - by Husni
    After agreeing to download and install the extra multimedia plugins, it says the package dependencies cannot be resolved, with the error below:

        The following packages have unmet dependencies:
         gstreamer0.10-plugins-bad:i386:
          Depends: libc6 (>= 2.15) but 2.15-0ubuntu20 is to be installed
          Depends: libcairo2 (>= 1.2.4) but 1.12.3+git20120608.f228769d-0ubuntu0ricotz~quantal0 is to be installed
          Depends: libcdaudio1 (>= 0.99.12p2) but 0.99.12p2-12 is to be installed
          Depends: libcurl3-gnutls (>= 7.16.2) but 7.27.0-1ubuntu1 is to be installed
          Depends: libdvdnav4 (>= 4.2.0+20120524) but 4.2.0+20120524-2 is to be installed
          Depends: libfaad2 (>= 2.7) but 2.7-8 is to be installed
          Depends: libgcc1 (>= 1:4.1.1) but 1:4.7.2-2ubuntu1 is to be installed
          Depends: libglib2.0-0 (>= 2.31.8) but 2.34.0-1ubuntu1 is to be installed
          Depends: libgsm1 (>= 1.0.13) but 1.0.13-4 is to be installed
          Depends: libgstreamer-plugins-bad0.10-0 (= 0.10.23-7ubuntu1) but 0.10.23-7ubuntu1 is to be installed
          Depends: libgstreamer-plugins-base0.10-0 (>= 0.10.36) but 0.10.36-1ubuntu1 is to be installed
          Depends: libgstreamer0.10-0 (>= 0.10.36) but 0.10.36-1ubuntu2 is to be installed
          Depends: libmpcdec6 (>= 1:0.1~r435) but 2:0.1~r459-1ubuntu1 is to be installed
          Depends: libopenal1 (>= 1:1.13) but 1:1.14-4ubuntu1 is to be installed
          Depends: liborc-0.4-0 (>= 1:0.4.16) but 1:0.4.16-2 is to be installed
          Depends: libpng12-0 (>= 1.2.13-4) but 1.2.49-1ubuntu1 is to be installed
          Depends: librsvg2-2 (>= 2.14.4) but 2.36.3-0ubuntu1 is to be installed
          Depends: librtmp0 (>= 2.3) but 2.4+20111222.git4e06e21-1 is to be installed
          Depends: libschroedinger-1.0-0 (>= 1.0.9) but 1.0.11-2 is to be installed
          Depends: libsndfile1 (>= 1.0.20) but 1.0.25-5 is to be installed
          Depends: libssl1.0.0 (>= 1.0.0) but 1.0.1c-3ubuntu2 is to be installed
          Depends: libstdc++6 (>= 4.1.1) but 4.7.2-2ubuntu1 is to be installed
          Depends: libvpx1 (>= 1.0.0) but 1.1.0-1 is to be installed
          Depends: libxvidcore4 (>= 1.2.2) but 2:1.3.2-9 is to be installed
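    A minimal set of diagnostic commands for this kind of unmet-dependency report (assuming a stock apt setup; the package names come from the error above):

        sudo apt-get update                                  # refresh the package lists first
        sudo apt-get -f install                              # ask apt to repair broken/partial dependencies
        apt-cache policy gstreamer0.10-plugins-bad libc6     # compare installed vs. candidate versions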

    Read the article

  • Why is Ubuntu One slow to sync in 11.10, either backup or any sub-folder contents?

    - by pst007x
    I have been trying to sync my Documents folder (1.4 GB) and it still hasn't finished; it has been syncing for a month. The top level syncs, that is, files and folders directly in the Documents folder, but the contents of sub-folders just hang. (I gave up and stopped syncing this folder.)

    I have also tried the backup facility in 11.10 to back up to Ubuntu One, after upgrading my Ubuntu One storage. It has been running for roughly 24 hours and has backed up what looks like a couple of percent. (By the way, what an excellent idea to back up to Ubuntu One, if only we could get it to actually work! :-o) The odd thing is that I can sync the same data to Dropbox within hours rather than months.

    This has been an issue since Ubuntu One's release. I have reported the problem, and there were promises it would be fixed in later releases, but it hasn't been. Canonical cannot help either; I have posted on several blogs, and a lot of people have the same problem but no fixes. So do I use Dropbox or another service until this is sorted out? Ubuntu does not seem to see it as an issue, so I think a fix will be a long time coming. (I do love the potential of Ubuntu One and its integration with the OS.)

    My internet speed is fine, there is no firewall (sudo ufw status: STATUS: INACTIVE), no proxy, etc.

    NB: I have raised this as a separate question from others posted here because mine relates to Ubuntu 11.10, and also to deja-dup backup to Ubuntu One. Thanks.

    Read the article

  • What's wrong with cplusplus.com?

    - by Kerrek SB
    This is perhaps not a perfectly suitable forum for this question, but let me give it a shot, at the risk of being moved away. There are several references for the C++ standard library, including the invaluable ISO standard, MSDN, IBM, cppreference, and cplusplus. Personally, when writing C++ I need a reference that has quick random access, short load times and usage examples, and I've been finding cplusplus.com pretty useful. However, I've been hearing negative opinions about that website frequently here on SO, so I would like to get specific: what are the errors, misconceptions or bad pieces of advice given by cplusplus.com? What are the risks of using it to make coding decisions?

    Let me add this point: I want to be able to answer questions here on SO with accurate quotes of the standard, and thus I would like to post immediately-usable links, and cplusplus.com would have been my choice site were it not for this issue.

    Update: There have been many great responses, and I have seriously changed my view on cplusplus.com. I'd like to list a few choice results here; feel free to suggest more (and keep posting answers). As of June 29, 2011:

      - Incorrect description of some algorithms (e.g. remove).
      - Information about the behaviour of functions is sometimes incorrect (atoi), fails to mention special cases (strncpy), or omits vital information (iterator invalidation).
      - Examples contain deprecated code (#include style).
      - Inexact terminology is doing a disservice to learners and the general community ("STL", "compiler" vs "toolchain").
      - Incorrect and misleading description of the typeid keyword.

    Read the article

  • How do I mount a CIFS share via fstab and give full RW access to Guest?

    - by Kendor
    I want to create a Public folder that has full RW access. The problem with my configuration is that Windows users have no issues as guests (they can read, write and delete), but my Ubuntu client can't do the same: we can only read and write, not create or delete.

    Here is the smb.conf from my server:

        [global]
            workgroup = WORKGROUP
            netbios name = FILESERVER
            server string = TurnKey FileServer
            os level = 20
            security = user
            map to guest = Bad Password
            passdb backend = tdbsam
            null passwords = yes
            admin users = root
            encrypt passwords = true
            obey pam restrictions = yes
            pam password change = yes
            unix password sync = yes
            passwd program = /usr/bin/passwd %u
            passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
            add user script = /usr/sbin/useradd -m '%u' -g users -G users
            delete user script = /usr/sbin/userdel -r '%u'
            add group script = /usr/sbin/groupadd '%g'
            delete group script = /usr/sbin/groupdel '%g'
            add user to group script = /usr/sbin/usermod -G '%g' '%u'
            guest account = nobody
            syslog = 0
            log file = /var/log/samba/samba.log
            max log size = 1000
            wins support = yes
            dns proxy = no
            socket options = TCP_NODELAY
            panic action = /usr/share/samba/panic-action %d

        [homes]
            comment = Home Directory
            browseable = no
            read only = no
            valid users = %S

        [storage]
            create mask = 0777
            directory mask = 0777
            browseable = yes
            comment = Public Share
            writeable = yes
            public = yes
            path = /srv/storage

    The following fstab entry doesn't yield full R/W access to the share:

        //192.168.0.5/storage /media/myname/TK-Public/ cifs rw 0 0

    This doesn't work either:

        //192.168.0.5/storage /media/myname/TK-Public/ cifs rw,guest,iocharset=utf8,file_mode=0777,dir_mode=0777,noperm 0 0

    Using the following location in Nemo/Nautilus, without the share being mounted, does work:

        smb://192.168.0.5/storage/

    Extra info: I just noticed that if I copy a file to the share after mounting, my Ubuntu client immediately makes "nobody" the owner, the group "no group" gets read and write, and everyone else is read-only. What am I doing wrong?
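    A hedged variant of the fstab line worth sketching here: map ownership on the client side so files owned by "nobody" on the server still look writable locally. The uid/gid of 1000 is an assumption (the first user on a stock Ubuntu install; check the real values with the id command):

        # /etc/fstab -- guest CIFS mount with client-side ownership mapping (uid/gid 1000 assumed)
        //192.168.0.5/storage  /media/myname/TK-Public  cifs  guest,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,iocharset=utf8  0  0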

    Read the article

  • Do you think Scala will be the dominant JVM language, i.e. be the next Java? [on hold]

    - by user1037729
    From what I've read about Scala so far, I think it has some nice features, but I do not think it should be "the next Java". It might end up being the next Java anyway (due to fashion rather than merit), but let's hope it does not. To me it adds a lot of complexity over Java, which is a simple and scalable language.

      - Scala's pattern matching lets you do some type/value checking more concisely, but it has limits: you cannot keep matching deeper and deeper down the object graph. So why not stick to Java and use decent invariants?
      - Scala provides tuples. These are easy enough to make in Java: create a static factory method and it all reads nicely too.
      - Scala provides mixins. Why not just use composition?
      - I believe Scala's implicits are bad: they can make code complex and hard to maintain. Explicitness is good.
      - Scala provides closures, but they will be in Java 8 too.
      - Scala has a lazy keyword for lazy instantiation. This is easy enough to do in Java by calling a getter that creates the instance when needed; no hidden magic here.
      - Scala can be used with Akka, but so can Java: there is a Java API for Akka.
      - Scala offers additional functional features, but these can all be written in Java, and many frameworks have implemented functional features in Java.

    All in all, Scala seems to offer only additional complexity.

    Read the article

  • I've been told that Exceptions should only be used in exceptional cases. How do I know if my case is exceptional?

    - by tieTYT
    My specific case here is that the user can pass a string into the application; the application parses it and assigns it to structured objects. Sometimes the user may type in something invalid. For example, their input may describe a person but give the age as "apple". Correct behavior in that case is to roll back the transaction and tell the user an error occurred and that they'll have to try again. There may also be a requirement to report every error we can find in the input, not just the first.

    In this case, I argued we should throw an exception. He disagreed, saying, "Exceptions should be exceptional: it's expected that the user may input invalid data, so this isn't an exceptional case." I didn't really know how to argue that point, because by the definition of the word he seems to be right.

    But it's my understanding that this is why exceptions were invented in the first place. It used to be that you had to inspect the result to see if an error occurred; if you failed to check, bad things could happen without you noticing. Without exceptions, every level of the stack needs to check the result of the methods it calls, and if a programmer forgets to check at one of those levels, the code could accidentally proceed and save invalid data (for example). That seems more error prone.

    Anyway, feel free to correct anything I've said here. My main question is: if someone says exceptions should be exceptional, how do I know if my case is exceptional?
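    A minimal Python sketch (the names are hypothetical) of the two styles being argued over: a result that every caller has to remember to inspect, versus a validation exception that cannot be silently ignored and can carry every error found in the input:

        class ValidationError(Exception):
            """Raised when user input cannot be parsed; carries every problem found."""
            def __init__(self, errors):
                super().__init__("; ".join(errors))
                self.errors = errors

        def parse_person(fields):
            errors = []
            if not fields.get("name"):
                errors.append("name is required")
            age = fields.get("age", "")
            if not age.isdigit():
                errors.append("age must be a number, got %r" % age)
            if errors:
                raise ValidationError(errors)   # a caller cannot accidentally ignore this
            return {"name": fields["name"], "age": int(age)}

        try:
            person = parse_person({"name": "Alice", "age": "apple"})
        except ValidationError as exc:
            print("Input rejected:", exc.errors)   # report all errors, roll back, ask the user to retry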

    Read the article

  • Does a large number of internal broken links affect SEO?

    - by TheBigK
    We have a WordPress blog and had the Disqus plugin installed for several months. Around late August this year, the plugin created a ton of URLs that linked to non-existent locations on our website. For example:

        Correct URL:     domain.com/correct-URL/
        Disqus created:  domain.com/correct-URL/344322/  (throws 404)
                         domain.com/correct-URL/433466/  (throws 404)

    So essentially Google found a LARGE number of broken links pointing to unknown locations on our own domain. As the count of those 404 errors rose, our site suffered a massive drop in traffic, and the crawl rate dropped to 10% of what it was earlier. I wish to know:

      - Can a large number of internal broken links (we have over 99k of them) cause rankings to drop?
      - I've fixed the issue in one go by creating 301 redirects from each bad URL to the correct URL and removing Disqus. Google, however, only drops the error count by about 1000 daily as I mark errors as 'fixed' in Google Webmaster Tools. Is there any way to speed this up?
      - Should I set a custom crawl rate of 'Fast' in GWT to make Google crawl our website faster?

    I'd appreciate your inputs and experience sharing.
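    A minimal sketch of the kind of pattern redirect described above, assuming the site runs on Apache with mod_alias available (the URL shape is taken from the example; an equivalent rule can be written for nginx or a WordPress redirect plugin):

        # .htaccess: send /anything/12345/ (the numeric segment added by the plugin) back to /anything/
        RedirectMatch 301 ^/(.+)/[0-9]+/$ /$1/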

    Read the article

  • I'm applying for a position at a startup. To whom should I address my cover letter?

    - by sapphiremirage
    One of the co-founders answered questions about the company when the job was posted, but I feel I shouldn't assume that he's the one in charge of hiring. Since the company is relatively new and its name overlaps with a lot of other things already on the web, it's hard to find any information about it online, much less the name of a hiring manager. I'm not even certain that they have a hiring manager, since I seem to remember they are just an 8-person team. I've heard that "To whom it may concern" is tacky, and normally I would say something along the lines of "Dear Head of Human Resources", but that clearly doesn't work in this case. Any idea what my salutation should be?

    Later edits:

    Final version: "To Joe Programmer and/or the AwesomeStartup.com hiring team," (plus a few words in the first paragraph explaining why I am addressing Joe Programmer). I've already sent the email, so nothing you say here will save me. However, feel free to comment on my decision if you think your words will be useful to future generations.

    Old version (left here because some people responded to it): "To the hiring manager for internships at Awesomestartup.com,"

    Additionally, because so many people made comments about the content of my letter: I did spend several hours writing the cover letter itself and making sure that it was awesome. After spending such a long time on the important part, I asked this question because I wanted to make sure it wouldn't get passed over by some human who was having a bad day and decided that my salutation was inappropriate. Not likely when the most likely reader of that email is a programmer type, I know, but I figured that it wouldn't hurt not to be sloppy.

    Read the article

  • How do you support your code after your employment ends?

    - by James
    What is the process for leaving a company (or even a group/division) in terms of code support? Is it best to handle all questions? Do you give the remaining developers access to yourself as a future resource? If so, is there a way to not give full access? I've experienced first hand how invaluable answers about the general software architecture from the initial developer can be.

    I understand that if serious assistance is needed, it becomes a typical case of employment negotiation for a support contract. But should serious assistance be required, what steps can you take to ease the process of contacting you? I was thinking of doing something like creating a (YOUR_NAME)_codesupport @ (YOUR_FAVORITE_EMAIL_CLIENT).com address.

    My situation specifics: I'm a co-op student, and as such I bounce around companies on 4-month stints. This means introducing myself to a lot of new code bases, as well as leaving a fair share of orphaned code behind when I leave a company. I feel bad if I leave junk code around.

    Read the article

  • Concurrency pattern of logger in multithreaded application

    - by Dipan Mehta
    The context: we are working on a multi-threaded (Linux, C) application that follows a pipeline model. Each module has a private thread and encapsulated objects that process data, and each stage has a standard way of exchanging data with the next unit. The application is free of memory leaks and is thread-safe, using locks at the points where stages exchange data. The total number of threads is about 15, and each thread can have from 1 to 4 objects, making about 25 to 30 objects in all, which all have some critical logging to do.

    Most discussions I have seen are about logging levels, as in Log4j and its various ports. The real big question is how the overall logging should happen:

      - One approach is that all local logging does fprintf to stderr, and stderr is redirected to some file. This approach becomes very bad once logs grow too big.
      - If every object instantiates its own logger (about 30-40 of them), there will be too many files, and unlike the approach above, one loses the true order of events. Timestamping is one possibility, but it is still a mess to collate.
      - If there is a single global logger (singleton pattern), it indirectly blocks many threads while one is busy writing a log entry. This is unacceptable when the threads' processing is heavy.

    So what should be the ideal way to structure the logging objects? What are some of the best practices in actual large-scale applications? I would also love to learn from the real designs of large-scale applications for inspiration!
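    This is not the poster's C code, but a minimal Python sketch of the compromise usually suggested here: every thread logs into one shared in-memory queue (a producer only pays for an enqueue, never for disk I/O), and a single listener thread drains the queue into one file, preserving a global order of events. The same shape translates to C as a mutex-protected ring buffer with a single writer thread.

        import logging
        import logging.handlers
        import queue

        log_queue = queue.Queue()                      # producers only ever touch this queue

        # Every module/object logs through a handler that just enqueues the record.
        logging.getLogger().addHandler(logging.handlers.QueueHandler(log_queue))
        logging.getLogger().setLevel(logging.DEBUG)

        # One listener thread drains the queue into a single file, keeping events ordered.
        file_handler = logging.FileHandler("pipeline.log")
        file_handler.setFormatter(logging.Formatter("%(asctime)s %(threadName)s %(name)s %(message)s"))
        listener = logging.handlers.QueueListener(log_queue, file_handler)
        listener.start()

        logging.getLogger("decoder").debug("frame 42 done")   # safe to call from any worker thread
        listener.stop()                                        # flushes remaining records at shutdown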

    Read the article

  • Newbie tips, please [closed]

    - by eXeP
    So, I just got a new computer and I want to put Ubuntu on my old laptop. I just need a few tips before installing it:

      1. Programs: where do I download them, how do I install them, and what is the file "ending" (Windows has .exe)?
      2. How much is the command line involved, and where do I learn the most common commands?
      3. A few programs you recommend (graphics editing, IDE, video player, web browser).
      4. Do I have to download drivers when installing the new OS? I plan on getting rid of Windows completely. I have no idea of the name of my graphics card, so how can I find out what it is if I have to download drivers? (It's not on the original box, or anywhere on the internet, believe me.)
      5. When installing a new OS, does it destroy everything else on the hard drive?
      6. Anti-virus: do I need one? I'm not super paranoid, and I don't visit "shady" sites.

    Please note that I have never used Linux, or any OS other than Windows, and sorry for my bad English. If this is the wrong place to post this, then please remove it. Thank you.
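    On question 4, a couple of standard commands (present on a stock Ubuntu install) that report the graphics hardware without opening the case:

        lspci | grep -i vga          # names the graphics chipset
        sudo lshw -C display         # more detail: vendor, model, and the driver in use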

    Read the article

  • Weird system freeze. Nothing works keyboard/mouse/reset button - Ubuntu 12.04 64bits [closed]

    - by Simon
    I have a fresh PC:

      - i5 3570K with Intel HD 4000 onboard graphics
      - ASRock Z77 Extreme4-M
      - 8GB RAM, ADATA 1333MHz
      - 1TB Seagate drive, 7200rpm

    I also have fresh systems (Ubuntu and Windows 7), and today something strange happened: Ubuntu simply froze, twice. Everything stopped; even the keyboard and mouse weren't responding, and even the RESET button didn't work. Only a long press of the reset button rebooted the PC. Right now memtest is running, but I'd like to know where else I can look for the cause. Can it be a software fault if even reset isn't working? Or should I test components: CPU, motherboard, disk? Which logs in Ubuntu should I check to diagnose this? I'm a bit confused.

    I've had a few adventures with this PC already. The motherboard originally shipped (ASRock Z68 Extreme3) was broken and had an old BIOS, so I had to contact the reseller, replace it, and in the end I decided on the Z77, but the whole thing took 3 weeks, so I have a bad feeling about this.

    Edit: both freezes happened while editing something in gedit (it may be a coincidence) and after a few updates today. When memtest finishes I'll check what was updated.
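    For the "which logs" part, a minimal checklist of standard places to look on Ubuntu after the next freeze (the entries written just before the hang are the interesting ones):

        less /var/log/syslog                          # general system log
        less /var/log/kern.log                        # kernel messages: MCE, GPU hangs, OOM kills
        dmesg | tail -n 50                            # kernel ring buffer for the current boot
        grep -iE "mce|error|fail" /var/log/kern.log   # quick scan for hardware error reports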

    Read the article

  • Cleaning Up After Chrome

    - by Mark Treadwell
    I find Google Chrome, which I have no interest in, continually getting installed on machines in my house, mostly because Adobe Shockwave brings it along as part of its install package. (Family members agree to the download without realizing that Chrome is being dropped on the machine as well.)

    My major issue after uninstalling Chrome is that you can no longer click on links in Outlook emails. There is a lot on the web about this, and Google has not been proactive about fixing their uninstaller. I have now added a registry file to my Win64 systems to reset the problem registry keys and clear the error. The registry file is pretty simple: it merely resets HKEY_CURRENT_USER\Software\Classes\.htm, HKEY_CURRENT_USER\Software\Classes\.html, and HKEY_CURRENT_USER\Software\Classes\.shtml back to their default value of "htmlfile". Chrome takes over the handling of these file extensions because its default install makes itself the default web browser, and the Chrome uninstaller fails to clear/reset them.

    In troubleshooting this, I looked in my registry based on the web information about the Chrome uninstall problem. Since my system had never had Chrome installed, my registry did not have the problem keys. To troubleshoot, I installed (ugh!) and uninstalled Chrome. Sure enough, Chrome left the expected debris, with a value string of "ChromeHTML.PR2EPLWMBQZK3BY7Z2BFBMFERU" or something similar. Resetting these values fixed the problem.

    I see that Chrome leaves quite a bit of debris behind in the registry. I guess it creates the keys and then leaves them behind, even though their presence (with bad data) subsequently affects operations.
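    A sketch of the registry file described above; the three keys and the "htmlfile" default come straight from the post (export the keys first as a backup before merging anything like this):

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Classes\.htm]
        @="htmlfile"

        [HKEY_CURRENT_USER\Software\Classes\.html]
        @="htmlfile"

        [HKEY_CURRENT_USER\Software\Classes\.shtml]
        @="htmlfile"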

    Read the article

  • SEO strategy for h1, h2, h3 tags for list of items

    - by Theo G
    On one page of my website I have a list of ALL the products on the site. This list is growing rapidly and I am wondering how to manage it from an SEO point of view.

    I am shortly adding a title to this section and giving it an H1 tag. Currently the name of each product in the list is not an h1/h2/h3/h4, just styled text, but I am looking to make these h2/h3/h4. Questions:

      - Is using h2/h3/h4 on these list items bad form, since headings should be used for content rather than a list of links?
      - I am thinking of limiting this main list to only 8 items and using h2 tags for each name. Do you think this will have a negative or positive effect overall?
      - I may create a piece of script that counts the first 8 items in the list; those 8 would get h2, and any after that would get h3 (all styled the same).
      - If I do add heading tags, should I put them just on the name of the product, or outside the a tag, therefore covering all the info?

    Has anyone been in a similar situation, and if so, did you see any significant difference?

    Read the article

  • SearchServer2008Express Search Webservice

    - by Mike Koerner
    I was working on calling the Search Server 2008 Express search webservice from PowerShell. I kept getting:

        <ResponsePacket xmlns="urn:Microsoft.Search.Response">
          <Response domain="">
            <Status>ERROR_NO_RESPONSE</Status>
            <DebugErrorMessage>The search request was unable to connect to the Search Service.</DebugErrorMessage>
          </Response>
        </ResponsePacket>

    I checked the user authorization, the webservice search status, even the WSDL. It turns out the URL for the SearchServer2008 search webservice was incorrect. I was calling

        $URI = "http://ss2008/_vti_bin/spsearch.asmx?WSDL"

    and it should have been

        $URI = "http://ss2008/_vti_bin/search.asmx?WSDL"

    Here is my sample PowerShell script:

        # WSS Documentation http://msdn.microsoft.com/en-us/library/bb862916.aspx
        $error.clear()

        # Bad SearchServer2008Express Search URL
        $URI = "http://ss2008/_vti_bin/spsearch.asmx?WSDL"
        # Good SearchServer2008Express Search URL
        $URI = "http://ss2008/_vti_bin/search.asmx?WSDL"

        $search = New-WebServiceProxy -uri $URI -namespace WSS -class Search -UseDefaultCredential

        $queryXml = "<QueryPacket Revision='1000'>
          <Query>
            <SupportedFormats>
              <Format revision='1'>urn:Microsoft.Search.Response.Document.Document</Format>
            </SupportedFormats>
            <Context>
              <QueryText language='en-US' type='MSSQLFT'>SELECT Title, Path, Description, Write, Rank, Size FROM Scope() WHERE CONTAINS('Microsoft')</QueryText>
              <!-- <QueryText language='en-US' type='TEXT'>Microsoft</QueryText> -->
            </Context>
          </Query>
        </QueryPacket>"

        $statusResponse = $search.Status()
        write-host '$statusResponse:' $statusResponse

        $GetPortalSearchInfo = $search.GetPortalSearchInfo()
        write-host '$GetPortalSearchInfo:' $GetPortalSearchInfo

        $queryResult = $search.Query($queryXml)
        write-host '$queryResult:' $queryResult

    Read the article

  • Providing SSH tunneling: what to think about when configuring Ubuntu Server

    - by bigbadonk420
    Recently I've considered, mostly as a pet project, setting up accounts for a closed group of users on my box so they can tunnel things like web traffic over SSH; some of it for friends who live abroad, and perhaps also to help some people bypass national censorship. There are some things I imagine I need to do, such as:

      - Disabling shell access by setting the shell to /bin/false or similar.
      - Finding software that can track bandwidth usage per user, historically.
      - Making sure that each user can only use a certain amount of bandwidth.

    The reason I'm posting here is to look around and get some pointers on what I should read up on, as well as to hear any software recommendations for doing this. I already know a bit, since I've got SSH tunneling up and running already; I just don't feel like letting other people loose on it without restrictions and some basic monitoring. I'm primarily trying to learn here, so if you think this is a Very Bad Idea (or if you have a better idea of how to do this) then by all means say so, but please include some information on how to do it. :)

    (I'm also open to trying things like OpenVPN, but it seems really hard to set up; also, I've heard SSH more often works in locked-down environments.)
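    A minimal sshd_config sketch for tunnel-only accounts, assuming the users are collected into a hypothetical 'tunnelonly' group; per-user bandwidth accounting and limits still need something separate (for example iptables accounting plus tc):

        # /etc/ssh/sshd_config -- restrictions for tunnel-only users (the group name is an assumption)
        Match Group tunnelonly
            AllowTcpForwarding yes       # keep -L/-D port forwarding working
            X11Forwarding no
            AllowAgentForwarding no
            ForceCommand /bin/false      # any command/shell request is replaced; clients connect with ssh -N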

    Read the article

  • Wrong resolution for Lightdm/GDM on Ubuntu 13.04 using HDMI

    - by f03lipe
    I've tried all the solutions I could find on this, but the problem persists. The login screen (under both GDM and LightDM) runs at the wrong resolution, even though everything is fine once I log in. The problem occurs only when my HDMI cable is connected to my other screen: the login screen resolution becomes 1024x768 (on my 1366x768 laptop panel) and is mirrored on my external screen, which is 1920x1080.

    I had this issue on 12.04 (the last version before I upgraded to 13.04), but I fixed it by adding xrandr commands at the beginning of the /etc/gdm/Init/Default file. That doesn't seem to work anymore. I've also tried telling LightDM to run a script that fixes the resolution with xrandr (by editing /etc/lightdm/lightdm.conf), but LightDM crashes and I'm forced to log in with low graphics settings.

    Hint: while Ubuntu is loading, the resolution starts out OK, then goes bad right before the login screen is initialized. Does that mean there's nothing wrong with my graphics card? What do you think? Cheers!
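    A hedged sketch of the LightDM route mentioned above: point display-setup-script at a small script that sets the greeter resolution. The output names LVDS1 and HDMI1 are placeholders; use whatever xrandr reports on this machine, and make the script executable (chmod +x).

        # /etc/lightdm/lightdm.conf
        [SeatDefaults]
        display-setup-script=/usr/local/bin/greeter-resolution.sh

        #!/bin/sh
        # /usr/local/bin/greeter-resolution.sh -- run by LightDM before the greeter starts
        # Output names are examples; check the output of `xrandr` for the real ones.
        xrandr --output LVDS1 --mode 1366x768 --output HDMI1 --mode 1920x1080 --right-of LVDS1
        exit 0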

    Read the article

  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background The "all-PK-must-be-surrogates" approach is not present in Codd's Relational Model or any SQL Standard (ANSI, ISO or other). Canonical books seems to elude this restrictions too. Oracle's own data dictionary scheme uses natural keys in some tables and surrogate keys in other tables. I mention this because these people must know a thing or two about RDBMS design. PPDM (Professional Petroleum Data Management Association) recommend the same canonical books do: Use surrogate keys as primary keys when: There are no natural or business keys Natural or business keys are bad ( change often ) The value of natural or business key is not known at the time of inserting record Multicolumn natural keys ( usually several FK ) exceed three columns, which makes joins too verbose. Also I have not found canonical source that says natural keys need to be immutable. All I find is that they need to be very estable, i.e need to be changed only in very rare ocassions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too. The origins of the "all-surrogates" approach seems to come from recommendations from some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of maintainability and readability of the SQL code. Much prevision is made for something that may or may not happen in the future ( the natural PK changed so we will have to use the RDBMS cascade update funtionality ) at the expense of day-to-day task like having to join more tables in every query and having to write code for importing data between databases, an otherwise very strightfoward procedure (due to the need to avoid PK colisions and having to create stage/equivalence tables beforehand ). Other argument is that indexes based on integers are faster, but that has to be supported with benchmarks. Obviously, long, varying varchars are not good for PK. But indexes based on short, fix-length varchar are almost as fast as integers. The questions - Is there any canonical source that supports the "all-PK-must-be-surrogates" approach ? - Has Codd's relational model been superceded by a newer relational model ?

    Read the article

  • ERP/CRM systems: desktop based or web based? [closed]

    - by Parhs
    I have seen two or three ERPs in action, and I am wondering which is better: a desktop-based application, or a web-based one displayed in a browser.

    My first experience was with a web-based ERP when I was 14 years old. It was terribly slow: for the simplest task you had to do lots of clicks, there was no keyboard support, and pages took ages to load.

    Last year I worked on migrating an old terminal-based COBOL application to a newer computer. The machine it had run on, which still worked without problems, was from 1993. The user interface was of course text-based, and the speed at which those guys placed orders was amazing: just type the name of the customer, then 5-10 keys to add a product to the order. Compared to this, the order-entry page of the web ERP (see the sales orders link) seems terribly slow for adding a product: no keyboard shortcut works to save what you added, and generally I believe you need four times longer to place an order than with the text interface. Having to use both mouse and keyboard for this task is bad and sadistic.

    So how on earth can these people use a system like that? In the long run a desktop application seems the only way. Of course browsers support shortcuts, but the way to override the browser's defaults isn't cross-compatible, which is a huge problem. Finally, if in the near future we are forced to use the cloud, what about keyboard shortcuts?

    I feel confused. I have seen converters of desktop applications to browser applications, but they are slow as hell. The question is: what about user friendliness? What kind of application would you use?

    Read the article

  • Should we encourage coding styles in favor of developer's autonomy, or discourage it in favor of consistency?

    - by Saeed Neamati
    One developer writes if/else blocks with one-line statements like:

        if (condition)
            // Do this one-line code
        else
            // Do this one-line code

    Another uses curly braces for all of them:

        if (condition)
        {
            // Do this one-line code
        }
        else
        {
            // Do this one-line code
        }

    One developer first instantiates an object, then uses it:

        HelperClass helper = new HelperClass();
        helper.DoSomething();

    Another instantiates and uses the object in one line:

        new HelperClass().DoSomething();

    One developer is more at ease with arrays and for loops:

        string[] ordinals = new string[] { "First", "Second", "Third" };
        for (int i = 0; i < ordinals.Length; i++)
        {
            // Do something
        }

    Another writes:

        List<string> ordinals = new List<string>() { "First", "Second", "Third" };
        foreach (string ordinal in ordinals)
        {
            // Do something
        }

    I'm sure you know what I'm talking about. I call it coding style (because I don't know what else to call it). But whatever we call it, is it good or bad? Does encouraging it have an effect on developer productivity? Should we ask developers to write code the way we tell them to, so the whole system becomes style-consistent?

    Read the article

  • Planning milestones and time

    - by Ignas
    I was hired by a marketing company a year ago initially for link building / SEO stuff, but I'm actually a Web developer and took the job just in desperation to have one (I'm still quite young and just finished 2nd year of University). From the 3rd day my boss realised that I'm not into that stuff at all and since he had an idea of a web based app we started to plan it. I estimated that it shouldn't take me longer than two months to do it, but as I was making it we soon realised that we want to add more and more stuff to make it even better. So the development on my own lasted for about 4 months, but then it became an enterprise size app and we hired another programmer to work along me. The guy was awesome at what he did, but because I was assigned to be programmer/project manager I had to set up milestones with deadlines and we missed most of them, because most of the time it was too much work, and my lack of experience kept me setting really optimistic deadlines. We still kept adding features and had changed the architecture of the application twice. My boss is a great guy and he gets that when we add features it expands the time frame in which things should be done so he wasn't angry at me nor the other guy. But I was feeling bad (I still am) that I suck at planning. I gained loads of experience from the programming side, but I still lack the management/planning skills which make me go nuts. So over the last year I have dedicated probably about 8 months of work to this app (obviously my studies affected it) and we're launching as a closed beta this month. So my question is how do I get better at planning/managing a project, how do you estimate the times? What do you take into consideration when setting goals. I'm working alone again because the other guy moved from the city. But I'm sure we'll be hiring to help me maintain it so I need to get better at it. Any hints, points or anything on the topic are appreciated.

    Read the article

  • Thoughts on exception handling.

    - by AndyScott
    I was working on a Windows Forms app (something I haven't done in a while), adding threading and logging so that it would work a little more smoothly and keep a record of who did what. I was just about to check it into source control when I noticed that the Output window was showing "A first chance exception of type 'System.InvalidCastException' occurred in mscorlib.dll", so I googled it. While reading some threads about the error, I came across the following comment, and it got me thinking:

        "In addition, while they should be avoided if possible, exceptions are a quite legitimate part of program execution. It's their going unhandled that is a real issue, because that means crashy, crashy."

    How do you normally use exception handling? I feel that exceptions are intended to handle errors in code (in my experience, generally related to bad data making its way into the system). Now don't get me wrong, I understand that exceptions happen and should be dealt with, but I feel that they are a "last resort" to keep a program from crashing, and should never be a way to pass data or continue logical processing that could be handled in the normal code flow. I mention this because I have seen it done. What do you think?

    Read the article

  • Question regarding Readability vs Processing Time

    - by Jordy
    I am creating a flowchart for a program with multiple sequential steps. Every step should be performed only if the previous step succeeded. I use a C-based programming language, so the layout would be something like this:

    METHOD 1:

        if (step_one_succeeded()) {
            if (step_two_succeeded()) {
                if (step_three_succeeded()) {
                    // etc. etc.
                }
            }
        }

    If my program had 15+ steps, the resulting code would be terribly unfriendly to read. So I changed my design and implemented a global error code that I keep passing by reference, to make everything more readable. The resulting code would be something like this:

    METHOD 2:

        int _no_error = 0;
        step_one(_no_error);
        if (_no_error == 0)
            step_two(_no_error);
        if (_no_error == 0)
            step_three(_no_error);
        if (_no_error == 0)
            step_four(_no_error);

    The cyclomatic complexity stays the same. Now let's say there are N steps, and let's assume that checking a condition costs 1 clock and performing a step takes no time. Method 1 performs anywhere between 1 and N checks, while Method 2 always performs N-1, so Method 1 will be faster most of the time.

    Which brings me to my question: is it bad practice to sacrifice time in order to make the code more readable? And why (not)?

    Read the article

  • Law of Demeter confusion [duplicate]

    - by user2158382
    This question already has an answer here: Rails: Law of Demeter Confusion (4 answers).

    I am reading a book called Rails AntiPatterns, and it talks about using delegation to avoid breaking the Law of Demeter. Here is their prime example. They believe that calling something like this in the controller is bad (and I agree):

        @street = @invoice.customer.address.street

    Their proposed solution is to do the following:

        class Customer
          has_one :address
          belongs_to :invoice

          def street
            address.street
          end
        end

        class Invoice
          has_one :customer

          def customer_street
            customer.street
          end
        end

        @street = @invoice.customer_street

    They state that since you only use one dot, you are not breaking the Law of Demeter here. I think this is incorrect, because you are still going through customer, and through address, to get the invoice's street. I primarily got this idea from a blog post I read: http://www.dan-manges.com/blog/37

    In the blog post the prime example is:

        class Wallet
          attr_accessor :cash
        end

        class Customer
          has_one :wallet

          # attribute delegation
          def cash
            @wallet.cash
          end
        end

        class Paperboy
          def collect_money(customer, due_amount)
            if customer.cash < due_amount
              raise InsufficientFundsError
            else
              customer.cash -= due_amount
              @collected_amount += due_amount
            end
          end
        end

    The blog post states that although there is only one dot (customer.cash instead of customer.wallet.cash), this code still violates the Law of Demeter: "Now in the Paperboy collect_money method, we don't have two dots, we just have one in 'customer.cash'. Has this delegation solved our problem? Not at all. If we look at the behavior, a paperboy is still reaching directly into a customer's wallet to get cash out."

    Can somebody help me clear up the confusion? I have been trying to let this topic sink in for the past two days, but it is still confusing.

    Read the article

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings in some decent directories (Qype etc.). The site was rebuilt in Drupal 7 two months ago with all the basics done: URL rewriting, an XML sitemap submitted to Google, and most importantly, good quality, structured content.

    I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched eight weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low-quality links from link farms and directories.

    My question is: what is the correct way to deal with this? Bing Webmaster Tools can "disavow" links, and I guess I can try to contact the link farms to have the links removed. I've already submitted a request to Google asking for the penalty to be removed, since we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily and have built some further decent quality, relevant links. Is there anything more I can do?

    Read the article
