Search Results

Search found 1146 results on 46 pages for 'jikan the useless'.

Page 1/46 | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Byte Size Tips: How to Disable the Useless Dashboard on Mac OS X

    - by The Geek
    After getting my new MacBook Air with the awesome battery life, I decided to give OS X a spin for a while to see how I liked it. About 34 seconds later, I encountered my first irritation: The stupid Dashboard feature is just completely useless. Here’s how to disable it. Note: overall, Mac OS X is a really great operating system. It’s just this one feature that makes no sense. Simply Remove Dashboard from Spaces     

    Read the article

  • Strengthening code with possibly useless exception handling

    - by rdurand
    Is it good practice to implement seemingly useless exception handling, just in case another part of the code is not written correctly?

    Basic example. A simple one, so I don't lose everybody :). Let's say I'm writing an app that will display a person's information (name, address, etc.), the data being extracted from a database. Let's say I'm the one coding the UI part, and someone else is writing the DB query code. Now imagine that the specifications of your app say that if the person's information is incomplete (let's say the name is missing in the database), the person coding the query should handle this by returning "NA" for the missing field. What if the query is poorly coded and doesn't handle this case? What if the guy who wrote the query hands you an incomplete result, and when you try to display the information, everything crashes, because your code isn't prepared to display empty values? This example is very basic. I believe most of you will say "it's not your problem, you're not responsible for this crash". But it's still your part of the code that is crashing.

    Another example. Let's say now I'm the one writing the query. The specifications don't say the same as above, but rather that the guy writing the "insert" query should make sure all the fields are complete when adding a person to the database, to avoid inserting incomplete information. Should I protect my "select" query to make sure I give the UI guy complete information?

    The questions. What if the specifications don't explicitly say "this guy is the one in charge of handling this situation"? What if a third person implements another query (similar to the first one, but on another DB) and uses your UI code to display it, but doesn't handle this case in his code? Should I do what's necessary to prevent a possible crash, even if I'm not the one supposed to handle the bad case? I'm not looking for an answer like "(s)he's the one responsible for the crash", as I'm not resolving a conflict here; I'd like to know whether I should protect my code against situations it's not my responsibility to handle. Here, a simple "if empty, do something" would suffice (see the sketch below). In general, this question is about redundant exception handling. I'm asking it because when I work alone on a project, I may write similar exception handling 2-3 times in successive functions, "just in case" I did something wrong and let a bad case slip through.
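
    A minimal sketch of the kind of defensive check being discussed; the PersonInfo type and its fields are invented for illustration, and C++ is assumed only because the question names no language:

        #include <iostream>
        #include <string>

        struct PersonInfo {
            std::string name;
            std::string address;
        };

        // The display layer guards against incomplete data itself, even though
        // the query layer is "supposed" to substitute "NA" for missing fields.
        void displayPerson(const PersonInfo& p) {
            std::cout << "Name:    " << (p.name.empty()    ? "NA" : p.name)    << '\n';
            std::cout << "Address: " << (p.address.empty() ? "NA" : p.address) << '\n';
        }

    Whether that duplicate guard belongs in the UI code, in the query code, or in both is exactly what the question is asking.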

    Read the article

  • Is this kind of Design by Contract useless?

    - by Charlie Pigarelli
    I've just started university in informatics and I'm attending a programming course on C(++). The professor prefers to teach very few things (in 3 months we have only just reached functions) and ties every topic to a style of program design that is somewhat similar to Design by Contract. Basically, what he asks us to do is to write every exercise with Pre-condition, Post-condition and Invariant comments that should prove the correctness of each program we write (see the sketch below for the general idea). But this doesn't make any sense to me. I mean, OK: maybe writing down your thoughts prevents you from making some mistakes, but if this is all an abstract exercise, then if your intuition about the program is wrong you'll write the program wrong, and then you'll probably also write the pre and post conditions wrong, convincing yourself of its correctness. Most of the time, both I and other students have written programs that seemed fine and that had correct pre and post conditions too. But at the moment of testing they were just completely wrong. I had some programming experience before this course, had written a lot of lines of code, and was comfortable just writing a program and unit testing it. That takes less time and is less "abstract" than thinking about what every single piece of your program should do in every case (which is rather like testing it mentally). Finally, all these pre and post conditions take me something like 80% of the total time of the exercise. It's harder to get the pre and post conditions down correctly than to write the program itself. Since we are probably the only course at the only university in the entire world that does things this way, could someone please tell me how I should handle this? Am I right in thinking that this isn't worth anything? Should I change university? (There are something like double the people attending that course, and it seems that usually very few people pass the exam in the first year.) Should I convince myself that his method is right?
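
    For context, a small sketch of the style being described, with the pre-condition, post-condition and invariant written as comments (and, optionally, checked at runtime with assert). The function is an invented example, not one from the course:

        #include <cassert>

        // Pre-condition:  n >= 0
        // Post-condition: returns n! (with 0! == 1)
        // Invariant:      at the end of each loop iteration, result == i!
        long long factorial(int n) {
            assert(n >= 0);              // pre-condition, checked at runtime
            long long result = 1;
            for (int i = 1; i <= n; ++i) {
                result *= i;             // invariant holds here: result == i!
            }
            return result;
        }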

    Read the article

  • Minimize useless tweaking of a numeric app

    - by Potatoswatter
    I'm developing a numeric application (a nonlinear optimizer), with a zillion knobs to tweak and rising. It's not my first foray into this domain, but this time there are even more variables in the code and I'm on a tight schedule. I don't want to waste time fiddling. Days or even months can potentially be wasted adjusting variables, recompiling, and reprocessing benchmark datasets. The resulting data is viewed and trouble spots are checked. The overall quality of the solution is reported by the program, but the meaning of the report could change over time. (Numeric units for the report are one thing I'm trying to nail down.) One main problem is organizing result files so that each can be identified with specific code changes. Note-taking can be a pain; is there software to help with this? Are there agreed best practices for making this kind of development cycle move forward reliably? The solver package converges to its optimal solution with mechanical determination, but I'm all too familiar with the way an excess of design decisions can mire development.
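
    One possible way to handle the result-file bookkeeping mentioned above is to stamp every output file with the code revision and the knob settings that produced it. A rough sketch, assuming the build system passes a GIT_HASH define (e.g. -DGIT_HASH="\"abc1234\"") and with made-up parameter names:

        #include <cstdio>
        #include <ctime>

        #ifndef GIT_HASH
        #define GIT_HASH "unknown"
        #endif

        struct SolverConfig {
            double step_size;
            double tolerance;
            int    max_iterations;
        };

        // Open a result file whose name and header record the revision and the
        // knob settings, so each output can be traced back to the exact
        // code/configuration pair that produced it.
        FILE* open_tagged_result_file(const SolverConfig& cfg) {
            char name[256];
            std::snprintf(name, sizeof(name), "results_%s_%ld.csv",
                          GIT_HASH, static_cast<long>(std::time(nullptr)));
            FILE* f = std::fopen(name, "w");
            if (f) {
                std::fprintf(f, "# revision=%s step=%g tol=%g maxit=%d\n",
                             GIT_HASH, cfg.step_size, cfg.tolerance,
                             cfg.max_iterations);
            }
            return f;
        }

    With the revision and parameters embedded in the file itself, each benchmark result can be matched to the code change that produced it without separate note-taking.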

    Read the article

  • Windows Azure Diagnostics: Next to Useless?

    - by Your DisplayName here!
    To quote my good friend Christian: “Tracing is probably one of the most discussed topics in the Windows Azure world. Not because it is freaking cool – but because it can be very tedious and partly massively counter-intuitive.”

    <rant> The .NET Framework has this wonderful facility called TraceSource. You define a named trace and route it to a configurable listener. This gives you a lot of flexibility – you can create a single trace file, or multiple ones. There is even nice tooling around it. SvcTraceViewer from the SDK lets you open the XML trace files – you can filter and sort by trace source and event type, aggregate multiple files… blablabla. Just what you would expect from a decent tracing infrastructure.

    Now comes Windows Azure. I was already very grateful that, starting with SDK 1.2, we finally had a way to do tracing and diagnostics in the cloud (kudos!). But the way the Azure DiagnosticMonitor is currently implemented could be called flawed. The Azure SDK provides a DiagnosticsMonitorTraceListener – which is the right way to go. The only problem is that the way this works is that all traces (from all sources) get written to an ETW trace. The DiagMon then listens to these traces and copies them periodically to your storage account. So far so good. But guess what happens to your nice trace files: the trace source names get “lost”. They appear at the end of your message text. So much for filtering and sorting and aggregating (regex #fail or #win??). Every trace line becomes an entry in an Azure Storage Table – the svclog format is gone. So much for the existing tooling.

    To work around that problem, one approach was to write your own trace listener (!) that creates svclog files inside local storage and use the DiagMon to copy those. Christian has a blog post about that. OK, done that. Now it turns out that this mechanism no longer works in 1.3 with Full IIS (see here). Quoting: “Some IIS 7.0 logs not collected due to permissions issues... The root cause to both of these issues is the permissions on the log files.” And the workaround: “To read the files yourself, log on to the instance with a remote desktop connection.” Now have fun with your multi-instance deployments… </rant>

    Read the article

  • How to prevent useless content from loading on the page in Responsive Design

    - by Ícaro Leandro
    In Responsive Design, we hide elements on the page with @media queries and display: none in the CSS. OK, but in my system, browsers narrower than 800px must not just hide some content – they should avoid loading it at all. In other words, on a desktop with more than 800px of screen width the page loads fully; on mobile devices, or even on a desktop narrower than 800px, some content should not be loaded in the first place. I want to make the page load faster in these browsers. The system is built in PHP and has some JavaScript. Thanks...

    Read the article

  • 12.04, nvidia-settings makes one of my dual monitors grey and useless, disables network

    - by Kerrick
    I'm running Ubuntu 12.04 64-bit, Precise Pangolin, with a PNY GTS 250 1GB video card and a monitor plugged into each of the DVI ports. I'm using the proprietary drivers (post-release updates). If I set anything to do with Separate X Screens up in nvidia-settings (and write it to xorg.conf and reboot), my second monitor has a grey background, no menu bar, no ability to have a window on it, the second monitor doesn't get picked up in a screenshot, and if I move my mouse cursor onto it the cursor is an ugly black X. Plus, my network is unable to connect to anything. If I subsequently delete /etc/X11/xorg.conf and reboot, everything goes back to working, albeit with a single monitor activated. If I set anything to do with TwinView up in nvidia-settings, my second monitor starts working, but it isn't seen as a second monitor by Ubuntu, so I can't apply color calibration to it separately. Plus, my mouse gets "caught" between the monitors every time I try to move my cursor between the two. What gives? If it helps, this is the xorg.conf that nvidia-settings generates for Separate X Screens.

    Read the article

  • nvidia-settings makes one of my dual monitors grey and useless, disables network

    - by Kerrick
    I'm running Ubuntu 12.04 64-bit, Precise Pangolin, with a PNY GTS 250 1GB video card and a monitor plugged into each of the DVI ports. I'm using the proprietary drivers (post-release updates). If I set anything to do with Separate X Screens up in nvidia-settings (and write it to xorg.conf and reboot), my second monitor has a grey background, no menu bar, no ability to have a window on it, the second monitor doesn't get picked up in a screenshot, and if I move my mouse cursor onto it the cursor is an ugly black X. Plus, my network is unable to connect to anything. If I subsequently delete /etc/X11/xorg.conf and reboot, everything goes back to working, albeit with a single monitor activated. If I set anything to do with TwinView up in nvidia-settings, my second monitor starts working, but it isn't seen as a second monitor by Ubuntu, so I can't apply color calibration to it separately. Plus, my mouse gets "caught" between the monitors every time I try to move my cursor between the two. What gives? If it helps, this is the xorg.conf that nvidia-settings generates for Separate X Screens.

    Read the article

  • Port forwarding on D-Link DIR-615 super-slow, useless

    - by Jaroslav Záruba
    Hello, I have replaced my old router with a DIR-615 from D-Link, and now port forwarding is so slow it makes the router practically useless. Accessing the router itself (the admin UI) works without issues, no delay whatsoever. But when I try to access a service on another computer in the network, the requests take minutes and minutes. (E.g. I can see the source of my GWT app's main page, but loading the additional CSS and JS files takes ages.) If anyone could recommend any further diagnostics I should do to figure out what is happening, that would be great. A few notes:
    - it happens with multiple services (a web app on Tomcat, viewing a directory index via Apache)
    - it makes no difference whether the service is hosted on a wired or wireless PC
    - accessing the service on localhost works fine
    - turning off the firewall on the target PC makes no difference either (makes sense)
    - when I replace this router with the old one (both 192.168.1.1) everything works fine
    - I see nothing suspicious in the router's log
    - I believe I have the latest firmware (4.11)
    - the DIR-615 sucks; it already died once completely
    Regards, Jarda Z.

    Read the article

  • Port forwarding on D-Link DIR-615 super-slow, useless

    - by Jaroslav Záruba
    Hello, I have replaced my old router with a DIR-615 from D-Link, and now port forwarding is so slow it makes the router practically useless for requests coming from outside my network. Accessing the router itself (the admin UI) from outside works without any issues, no delay whatsoever. But when I try to access, from outside, a service residing on any of the computers in my network, the requests take minutes and minutes. (E.g. I can see the source of my GWT app's main page, but loading the additional CSS and JS files takes ages.) If anyone could recommend any further diagnostics I should do to figure out what is happening, that would be great. A few notes:
    - it happens with multiple services (a web app on Tomcat, viewing a directory index via Apache)
    - it makes no difference whether the service is hosted on a wired or wireless PC
    - accessing the service on localhost works fine, as does any 'inner' communication
    - turning off the firewall on the target PC makes no difference either (makes sense)
    - when I replace this router with the old one (both 192.168.1.1) everything works fine
    - I see nothing suspicious in the router's log
    - I believe I have the latest firmware (4.11)
    - the DIR-615 sucks; it already died once completely
    Regards, Jarda Z.

    Read the article

  • No speed-up with useless printf's using OpenMP

    - by t2k32316
    I just wrote my first OpenMP program that parallelizes a simple for loop. I ran the code on my dual-core machine and saw some speed-up when going from 1 thread to 2 threads. However, I ran the same code on a school Linux server and saw no speed-up. After trying different things, I finally realized that removing some useless printf statements caused the code to have significant speed-up. Below is the main part of the code that I parallelized:

        #pragma omp parallel for private(i)
        for (i = 2; i <= n; i++) {
            printf("useless statement");
            prime[i-2] = is_prime(i);
        }

    I guess that the implementation of printf has significant overhead that OpenMP must be duplicating with each thread. What causes this overhead and why can OpenMP not overcome it?

    Read the article

  • Useless variable name in C struct type definition

    - by user1210233
    I'm implementing a linked list in C. Here's a struct that I made, which represents the linked list:

        typedef struct llist {
            struct lnode* head;  /* Head pointer either points to a node with data or NULL */
            struct lnode* tail;  /* Tail pointer either points to a node with data or NULL */
            unsigned int size;   /* Size of the linked list */
        } list;

    Isn't the "llist" basically useless? When a client uses this library and makes a new linked list, he would have the following declaration:

        list myList;

    So typing llist just before the opening brace is practically useless, right? The following code basically does the same job:

        typedef struct {
            struct lnode* head;  /* Head pointer either points to a node with data or NULL */
            struct lnode* tail;  /* Tail pointer either points to a node with data or NULL */
            unsigned int size;   /* Size of the linked list */
        } list;

    Read the article

  • Useless Plesk Error?

    - by Josh Pennington
    I am trying to determine why Plesk on my server won't start, and I have no idea where to even begin (since my hosting company appears not to want to help me out). Anyway, the error in my Plesk error_log is as follows:

        2010-12-25 21:30:28: (log.c.75) server started
        2010-12-25 21:30:28: (network.c.336) SSL: error:00000000:lib(0):func(0):reason(0)
        2010-12-25 21:30:28: (log.c.75) server started
        2010-12-25 21:30:28: (network.c.336) SSL: error:00000000:lib(0):func(0):reason(0)

    It leads me to believe it's a problem with the SSL on the server, but I am not sure what to make of the error. Can someone lead me in the right direction? Thanks Josh Pennington

    Read the article

  • String useless character strip - PHP

    - by Zoltan Repas
    Hi! I've got a huge problem. I made a special ID for the things on our webpage. Let's see an example: H0059 – this is the special ID, called the registration number. The last two chars are the thing's id. I'd like to cut off the useless characters to get the real ID, which means stripping the first char and all the 0s that come before any other digits. (Example: L0745 = 745, V1754 = 1754, L0003 = 3, B0141 = 141, P0040 = 40, V8000 = 8000) Please help me with this. I've tried with str_replace and explode but failed :( Thanks for the help.

    Read the article

  • Are there any useless language features in the C or C++ language? [closed]

    - by ThePlan
    As a beginner I often found myself saying "This is so useless. How can this ever help me?" only to find out later on that it is essential and I was a fool. The moral, however, tends to be that the C and C++ languages are perfect and contain nothing that is of no use to programmers. I'm asking this question for two reasons: out of pure curiosity, and so that I may know what not to learn. In a nutshell – are there any useless language features in the C or C++ language which good programmers almost never use? It would also be of great help if you would mention whether they are bad practice when used, or just plain useless.

    Read the article

  • XCode 3.2.1 and Instruments: Useless Stack Trace

    - by Jason George
    I've reached the stage where it's time to start tracking down memory leaks and, to my dismay, Instruments is giving me very little to go on (other than the fact that I definitely have leaks). My stack trace contains no information other than memory addresses. Since I'm working on a new project and I've transitioned to version 3.2.1 of XCode in tandem, I'm not sure if it's my program configuration or XCode that's causing the problem. I have found one reference to the issue coupled with a post on the dyld leak that seems to be prevalent with the 3.2.1 release. Since I haven't been able to find much on the problem I'm guessing it's something I've created rather than a systematic issue with XCode. If someone has any idea where I might have thrown a wrench in the works, I would love some pointers. Also, if someone could just verify that the stack trace is indeed functioning properly in 3.2.1 that would be useful as well.

    Read the article

  • Seemingly useless debugging environment for Android

    - by mikeY
    I've just started debugging my first three-line-long Android app and I can't seem to use the debug tool like I want to. Here's my code:

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            int a = 1 / 0;
        }

    Now I expect the debugger to halt the thread and show me the line number of the statement where the division by zero occurs. No, instead it shows some other method internal to the system for which I have no source. To make matters worse, there is no exception message either. Prior to this app, I created one which would do something when a button was pressed. If any exception was raised, again no useful line number or exception message would be shown. As of right now, there is no way to debug my app. Any ideas? I'm using the latest SDK along with the Eclipse ADT plugin and debugging on a real device (a Nexus One).

    Read the article

  • volatile keyword seems to be useless?

    - by Finbarr
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.atomic.AtomicInteger;

    public class Main implements Runnable {
        private final CountDownLatch cdl1 = new CountDownLatch(NUM_THREADS);
        private volatile int bar = 0;
        private AtomicInteger count = new AtomicInteger(0);
        private static final int NUM_THREADS = 25;

        public static void main(String[] args) {
            Main main = new Main();
            for (int i = 0; i < NUM_THREADS; i++)
                new Thread(main).start();
        }

        public void run() {
            int i = count.incrementAndGet();
            cdl1.countDown();
            try {
                cdl1.await();
            } catch (InterruptedException e1) {
                e1.printStackTrace();
            }
            bar = i;
            if (bar != i)
                System.out.println("Bar not equal to i");
            else
                System.out.println("Bar equal to i");
        }
    }

    Each Thread enters the run method and acquires a unique, thread-confined int variable i by getting a value from the AtomicInteger called count. Each Thread then awaits the CountDownLatch called cdl1 (when the last Thread reaches the latch, all Threads are released). When the latch is released, each thread attempts to assign its confined i value to the shared, volatile int called bar. I would expect every Thread except one to print out "Bar not equal to i", but every Thread prints "Bar equal to i". Eh, wtf does volatile actually do if not this?

    Read the article

  • useless class storage specifier in empty declaration

    - by robUK
    Hello, gcc 4.4.1, C89. I have the following code and I get the warning: "useless storage class specifier in empty declaration"

        static enum states {
            ACTIVE,
            RUNNING,
            STOPPED,
            IDLE
        };

    However, if I remove the static keyword I don't get that warning. I am compiling with the following flags: -Wall -Wextra Many thanks for any suggestions,

    Read the article

  • What C++ coding standard do you use?

    - by gablin
    For some time now, I've been unable to settle on a coding standard and use it consistently between projects. When starting a new project, I tend to change some things around (add a space here, remove a space there, add a line break here, an extra indent there, change naming conventions, etc.). So I figured that I might provide a piece of sample code, in C++, and ask you to rewrite it to fit your standard of coding. Inspiration is always good, I say. ^^ So here goes:

        #ifndef _DERIVED_CLASS_H__
        #define _DERIVED_CLASS_H__

        /**
         * This is an example file used for sampling code layout.
         *
         * @author Firstname Surname
         */

        #include <iostream>
        #include <string>
        #include <list>

        #include "BaseClass.h"
        #include "Stuff.h"

        /**
         * The DerivedClass is completely useless. It represents uselessness in
         * all its entirety.
         */
        class DerivedClass : public BaseClass {
            ////////////////////////////////////////////////////////////
            // CONSTRUCTORS / DESTRUCTORS
            ////////////////////////////////////////////////////////////

        public:
            /**
             * Constructs a useless object with default settings.
             *
             * @param value
             *        Is never used.
             * @throws Exception
             *         If something goes awry.
             */
            DerivedClass (const int value)
                : uselessSize_ (0) {}

            /**
             * Constructs a copy of a given useless object.
             *
             * @param object
             *        Object to copy.
             * @throws OutOfMemoryException
             *         If necessary data cannot be allocated.
             */
            DerivedClass (const DerivedClass& object) {}

            /**
             * Destroys this useless object.
             */
            ~DerivedClass ();

            ////////////////////////////////////////////////////////////
            // PUBLIC METHODS
            ////////////////////////////////////////////////////////////

        public:
            /**
             * Clones a given useless object.
             *
             * @param object
             *        Object to copy.
             * @return This useless object.
             */
            DerivedClass& operator= (const DerivedClass& object) {
                stuff_ = object.stuff_;
                uselessSize_ = object.uselessSize_;
                return *this;
            }

            /**
             * Does absolutely nothing.
             *
             * @param useless
             *        Pointer to useless data.
             */
            void doNothing (const int* useless) {
                if (useless == NULL) {
                    return;
                } else {
                    int womba = *useless;
                    switch (womba) {
                        case 0:
                            std::cout << "This is output 0";
                            break;
                        case 1:
                            std::cout << "This is output 1";
                            break;
                        case 2:
                            std::cout << "This is output 2";
                            break;
                        default:
                            std::cout << "This is default output";
                            break;
                    }
                }
            }

            /**
             * Does even less.
             */
            void doEvenLess () {
                int mySecret = getSecret ();
                int gather = 0;
                for (int i = 0; i < mySecret; i++) {
                    gather += 2;
                }
            }

            ////////////////////////////////////////////////////////////
            // PRIVATE METHODS
            ////////////////////////////////////////////////////////////

        private:
            /**
             * Gets the secret value of this useless object.
             *
             * @return A secret value.
             */
            int getSecret () const {
                if ((RANDOM == 42) && (stuff_.size() > 0) || (1000000000000000000 > 0) && true) {
                    return 420;
                } else if (RANDOM == -1) {
                    return ((5 * 2) + (4 - 1)) / 2;
                }

                int timer = 100;
                bool stopThisMadness = false;
                while (!stopThisMadness) {
                    do {
                        timer--;
                    } while (timer > 0);
                    stopThisMadness = true;
                }
                return timer;
            }

            ////////////////////////////////////////////////////////////
            // FIELDS
            ////////////////////////////////////////////////////////////

        private:
            /**
             * Don't know what this is used for.
             */
            static const int RANDOM = 42;

            /**
             * List of lists of stuff.
             */
            std::list<Stuff> stuff_;

            /**
             * Specifies the size of this object's uselessness.
             */
            size_t uselessSize_;
        };

        #endif

    Read the article

  • Will TSQL become useless because of new ORMs? [closed]

    - by Saeed Neamati
    With the introduction of LINQ to SQL, I found myself and my .NET developer colleagues gradually moving from TSQL to C# to create queries on the database. Entity Framework made that shift almost permanent. It's now been nearly 2 years that I've used LINQ to SQL and LINQ to Entities, and I haven't used TSQL that much. Yesterday, a colleague encountered a problem (he had to create an SP) and we went to help him. But we all found that our TSQL knowledge had definitely diminished, and a simple SP that would have seemed trivial to us 2 or 3 years ago was a challenge to solve yesterday. So it occurred to me that while TSQL's life is tied to SQL Server, and logically TSQL will live as long as SQL Server lives and doesn't change its SQL language, in practice it might die, and soon very few people might know it. Am I right? Does the existence of ORMs like Entity Framework threaten TSQL's life and usability?

    Read the article

  • How do I prevent useless content load on the page in responsive design?

    - by Ícaro Leandro
    In responsive design, elements are hidden in the page with @media queries and display: none in CSS. OK. In my design, however, browsers that are less than 800px wide should avoid loading some content at all. When accessed on a device with more than 800px of screen width, the page loads fully. On mobile devices, or even on a desktop with less than 800px of width, some content is hidden. I want to make the page load faster for low-resolution devices and avoid loading chunks of content that the user will never see. How can I go about this?

    Read the article

  • Files: Name column basically useless in list view. Feature or bug?

    - by Luksurious
    So the following is happening in Files whenever the width of the window is somewhat smaller than all the content of the list: in list view the name column is cropped to the point where no name is visible at all! And it is not even possible to change the column size manually. Funnily, in some situations it flickers back and forth between a larger column width and the small one before settling on the small size. Needless to say, it is extremely annoying. Is there a way around this or is it just "bad design"? Oh yeah, Ubuntu 13.04 & 13.10, Files version 3.8.2.

    Read the article

  • How can I stop Excel from eating my delicious CSV files and excreting useless data?

    - by atroon
    I have a database which tracks sales of widgets by serial number. Users enter purchaser data and quantity, and scan each widget into a custom client program. They then finalize the order. This all works flawlessly. Some customers want an Excel-compatible spreadsheet of the widgets they have purchased. We generate this with a PHP script which queries the database and outputs the result as a CSV with the store name and associated data. This works perfectly well too. When opened in a text editor such as Notepad or vi, the file looks like this:

        "Account Number","Store Name","S1","S2","S3","Widget Type","Date"
        "4173","SpeedyCorp","268435459705526269","","268435459705526269","848 Model Widget","2011-01-17"

    As you can see, the serial numbers are present (in this case twice, not all secondary serials are the same) and are long strings of numbers. When this file is opened in Excel, the result becomes:

        Account Number  Store Name  S1           S2  S3           Widget Type       Date
        4173            SpeedyCorp  2.68435E+17      2.68435E+17  848 Model Widget  2011-01-17

    As you may have observed, the serial numbers are enclosed by double quotes. Excel does not seem to respect text qualifiers in .csv files. When importing these files into Access, we have zero difficulty. When opening them as text, no trouble at all. But Excel, without fail, converts these files into useless garbage. Trying to instruct end users in the art of opening a CSV file with a non-default application is becoming, shall we say, tiresome. Is there hope? Is there a setting I've been unable to find? This seems to be the case with Excel 2003, 2007, and 2010.

    Read the article
