Search Results

Search found 55134 results on 2206 pages for 'argument error'.


  • C/C++: Who uses the logical operator macros from iso646.h and why?

    - by Jaime Soto
    There has been some debate at work about the merits of using the alternative spellings for C/C++ logical operators in iso646.h: and (&&), and_eq (&=), bitand (&), bitor (|), compl (~), not (!), not_eq (!=), or (||), or_eq (|=), xor (^), xor_eq (^=). According to Wikipedia, these macros facilitate typing logical operators on international (non-US English?) and non-QWERTY keyboards. All of our development team is in the same office in Orlando, FL, USA and from what I have seen we all use the US English QWERTY keyboard layout; even Dvorak provides all the necessary characters. Supporters of the iso646.h macros claim we should use them because they are part of the C and C++ standards. I think this argument is moot since digraphs and trigraphs are also part of these standards and they are not even supported by default in many compilers. My rationale for opposing these macros in our team is that we do not need them since: everybody on our team uses the US English QWERTY keyboard layout; C and C++ programming books from the US barely mention iso646.h, if at all; and new developers may not be familiar with iso646.h (this is expected if they are from the US). /rant Finally, to my set of questions: Does anyone on this site use the iso646.h logical operator macros? Why? What is your opinion about using the iso646.h logical operator macros in code written and maintained on US English QWERTY keyboards? Is my digraph and trigraph analogy a valid argument against using iso646.h with US English QWERTY keyboard layouts? EDIT: I missed two similar questions on StackOverflow: Is anybody using the named boolean operators? Which C++ logical operators do you use: and, or, not and the ilk or C style operators? Why?
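
    For anyone who has not seen the macros in use, here is a minimal C sketch (the identifiers are made up for illustration) showing that the alternative spellings and the conventional operators express exactly the same conditions:

        /* iso646.h supplies the operator macros in C; in C++ the same
           spellings are built-in keywords and need no header. */
        #include <iso646.h>
        #include <stdio.h>

        int main(void)
        {
            int ready = 1;
            int flags = 0x5;

            if (ready and (flags bitand 0x4) and not (flags bitand 0x2))
                printf("alternative spellings\n");

            if (ready && (flags & 0x4) && !(flags & 0x2))
                printf("conventional operators\n");

            return 0;
        }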

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Among the most useful predefined compiler macros in C/C++ are __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number at which the macro is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expanded macro, it is an attribute, but it serves exactly the same purpose. Now we get CallerLineNumberAttribute == __LINE__, CallerFilePathAttribute == __FILE__, CallerMemberNameAttribute == __FUNCTION__ (MSVC extension). The most important one is CallerMemberNameAttribute, which is very useful to implement the INotifyPropertyChanged interface without the need to hard code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and you get the property name as a string directly inserted by the C# compiler at compile time. public string UserName { get { return _userName; } set { _userName = value; RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")! } } protected void RaisePropertyChanged([CallerMemberName] string member = "") { var copy = PropertyChanged; if(copy != null) { copy(this, new PropertyChangedEventArgs(member)); } } Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose this feature for tracing to get your hands on the method name of your caller, along with other stuff, very fast now. All of this info is added at compile time, which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of this attribute: public static void TraceMessage(string message, [CallerMemberName] string memberName = "", [CallerFilePath] string sourceFilePath = "", [CallerLineNumber] int sourceLineNumber = 0) { Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber); } When I think of tracing I usually want an API which allows me to trace method enter and leave, and to trace messages with a severity like Info, Warning or Error. When I print a trace message it is very useful to print out the method and type name as well. So your API must either be able to pass the method and type name as strings, or extract them automatically by walking back one stack frame and fetching the info from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.
    A usable Trace API might therefore look like this: enum TraceTypes { None = 0, EnterLeave = 1 << 0, Info = 1 << 1, Warn = 1 << 2, Error = 1 << 3 } class Tracer : IDisposable { string Type; string Method; public Tracer(string type, string method) { Type = type; Method = method; if (IsEnabled(TraceTypes.EnterLeave, Type, Method)) { } } private bool IsEnabled(TraceTypes traceTypes, string Type, string Method) { // Do checking here if tracing is enabled return false; } public void Info(string fmt, params object[] args) { } public void Warn(string fmt, params object[] args) { } public void Error(string fmt, params object[] args) { } public static void Info(string type, string method, string fmt, params object[] args) { } public static void Warn(string type, string method, string fmt, params object[] args) { } public static void Error(string type, string method, string fmt, params object[] args) { } public void Dispose() { // trace method leave } } This minimal trace API is very fast but hard to maintain, since you need to pass in the type and method name as hard coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(… string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put them before the format string we would need to make them optional as well, which would mean the compiler would need to figure out what our trace message and arguments are (not likely), or we would need to specify everything explicitly just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure whether nobody inside MS agrees that the above API is reasonable to have, or (more likely) whether the whole talk about using this feature for diagnostic purposes was never about a core feature at all but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params argument array, another set of optional arguments which are always filled in by the compiler, but I do not know if this is an easy one. The thing I miss much more is the not-provided CallerType attribute. But not in the way you would think of. In the API above I did add some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check whether tracing for this type is enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which basically allows me to attach run time data to any existing type object in a super fast way. The key to success is the usage of generics. class Tracer<T> : IDisposable { string Method; public Tracer(string method) { if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave)) { } } public void Dispose() { if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave)) { } } public static void Info(string fmt, params object[] args) { } /// <summary> /// Every type gets its own instance with a fresh set of variables to describe the /// current filter status.
    /// </summary> /// <typeparam name="UsingType"></typeparam> internal class TraceData<UsingType> { internal static TraceData<UsingType> Instance = new TraceData<UsingType>(); public bool IsInitialized = false; // flag if we need to reinit the trace data in case of reconfigured trace settings at runtime public TraceTypes Enabled = TraceTypes.None; // Enabled trace levels for this type } } We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a TraceData static instance which is, due to the nature of generics, a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter path for the types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list you can reconfigure tracing at runtime safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type,TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always need to pay for the lookup, which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead. But it is cumbersome to always write the generic type argument explicitly, and worse, if you refactor code and move parts of it to other classes, it might be that you can no longer configure tracing correctly. I would therefore like to decorate my type with an attribute, [CallerType] class Tracer<T> : IDisposable, to tell the compiler to fill in the generic type argument automatically. class Program { static void Main(string[] args) { using (var t = new Tracer()) // equivalent to new Tracer<Program>() { That would be really useful and super fast, since you do not need to pass any type object around but you do have full type info at hand. This change would be breaking if another non-generic type exists in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since you can already get conflicts today if two generic types of the same name are defined in different namespaces. This would be only a variation of this issue. When you think about this further you can add more features, such as tracing the exception in your Dispose method if the method is left with an exception, with that little trick I did write about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can e.g. reconfigure tracing at run time; take a memory dump when a specific method is left with a specific exception; throw an exception when a specific trace statement is hit (useful for testing error conditions); execute a passed delegate which e.g. dumps additional state when enabled; write data to an in-memory ring buffer and dump it when specific events occur (e.g. a method is left with an exception, triggered from outside); or write data to an output device.
    This stuff is really useful to have when your code is in production on a mission critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (OK, not anymore with .NET 4, except if you enable a compatibility flag), where you would like to have a minidump, or you have reached after two weeks of operation a state where you need a full memory dump at a specific point in time in the middle of a transaction. On my older machine I get, with this super fast approach, 50 million traces/s when tracing is disabled. When I know that tracing is enabled for this type I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further whether a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore since I do want to do something now. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred to get direct access to the MethodHandle and not to the stringified version of it. But I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site, to augment the static CLR type data with run time data.
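
    To make the intended call pattern concrete, here is a rough C# usage sketch of the generic tracer described above (Tracer<T>, TraceTypes and their members are the post's own illustrative classes, not a shipped API):

        class OrderProcessor
        {
            public void Process()
            {
                // The enclosing type is tied to its trace data via the generic argument.
                using (var t = new Tracer<OrderProcessor>("Process"))
                {
                    Tracer<OrderProcessor>.Info("processing order {0}", 42);
                } // Dispose() emits the method-leave trace when EnterLeave is enabled
            }
        }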

    Read the article

  • Domains with similar names and legal issues

    - by abel
    I recently purchased one of those domain names like del.icio.us. While registering I found that delicious.com was being used. Argument: I found that delicious.com belonged to the same category as my to-be website. It served premium delicious dishes. Counter Argument: My to-be domain, though belonging to the same category, specialized in serving free but delicious dishes or in giving out links (affiliate) to other sites serving premium delicious dishes. Additional Counter Arguments: 1. delicious.com was not in English. 2. The del.icio.us in my domain name, though having the same spelling, is not going to be used in the same fashion. For example (this may not make sense, because the names have been changed), the d in delicious on my website actually stands for the Greek letter Delta (Δ/δ), and since internationalized domains are still not easily typable, I am going for the English equivalent. The prefix holds importance for the theme of the service which my website intends to offer. My Question: Can I use the domain name del.icio.us for my website? How are these kinds of matters dealt with? (The domain names used are fictitious. And I have already registered the domain but have not started using it. I chanced upon this domain name because it was short, easy to remember and suited the theme of my website.)

    Read the article

  • How to mount an external HDD?

    - by Slash
    I have Ubuntu Linux 12.04, the latest version right now. I want to mount an external 1TB NTFS HDD. I have followed many guides but still no success. The error I'm getting is this: Failed to read last sector (1953523119): Invalid argument HINTS: Either the volume is a RAID/LDM but it wasn't setup yet, or it was not setup correctly (e.g. by not using mdadm --build ...), or a wrong device is tried to be mounted, or the partition table is corrupt (partition is smaller than NTFS), or the NTFS boot sector is corrupt (NTFS size is not valid). Failed to mount '/dev/sdb1': Invalid argument The device '/dev/sdb1' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around? Using Storage Device Manager I get this error: Error mounting: mount exited with exit code 1: helper failed with: mount: only root can mount /dev/sdb1 on /media/Skliros_Diskos {external disk name} When I use sudo fdisk -l, this is the output: Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000e0bc6 Device Boot Start End Blocks Id System /dev/sda1 * 2048 618854399 309426176 83 Linux /dev/sda2 618856446 625141759 3142657 5 Extended /dev/sda5 618856448 625141759 3142656 82 Linux swap / Solaris Disk /dev/sdb: 1000.2 GB, 1000202043392 bytes 255 heads, 63 sectors/track, 121600 cylinders, total 1953519616 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0002093a Device Boot Start End Blocks Id System /dev/sdb1 2048 1953525167 976761560 7 HPFS/NTFS/exFAT
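
    For context, a rough sketch of the kind of manual commands involved here (device and mount point taken from the question; adjust to your system):

        # Attempt a minor NTFS repair, then mount explicitly as root with ntfs-3g.
        sudo ntfsfix /dev/sdb1
        sudo mkdir -p /media/Skliros_Diskos
        sudo mount -t ntfs-3g /dev/sdb1 /media/Skliros_Diskos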

    Read the article

  • When to use http status code 404

    - by Sybiam
    I am working on a project, and after arguing with people at work for more than an hour I decided to see what people on Stack Exchange might say. We're writing an API for a system; there is a query that should return a tree of Organizations or a tree of Goals. The tree of Organizations is the organization in which the user is present; in other words, this tree should always exist. In the organization, a tree of goals should always be present (that's where the argument started). In the case where the tree doesn't exist, my co-worker decided that it would be right to answer the request with status code 200, and then started asking me to fix my code because the application was falling apart when there is no tree. I'll try to spare flames and fury. I suggested raising a 404 error when there is no tree. It would at least let me know that something is wrong. When using 200, I have to add a special check to my response in the success callback to handle errors. I'm expecting to receive an object, but I may actually receive an empty response because nothing is found. It sounds totally fair to mark the response as a 404. And then war started and I got the message that I didn't understand the HTTP status code schema. So I'm here asking: what's wrong with 404 in this case? I even got the argument "It found nothing, so it's right to return 200". I believe that it's wrong since the tree should always be present. If we found nothing and we are expecting something, it should be a 404. Extra: Also, I believe the best answer to the problem is to create default objects when organizations are created; having no tree shouldn't be a valid case and should be seen as undefined behavior. There is no way an account can be used without both trees. For that reason, they should always be present.
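
    For illustration, here are two hypothetical raw HTTP exchanges for the missing-tree case (the path and payload are made up), showing what each side of the argument would return:

        GET /api/organizations/42/goal-tree HTTP/1.1

        Co-worker's approach (the client must special-case the empty body):

        HTTP/1.1 200 OK
        Content-Length: 0

        404 approach (the status line itself signals that the expected resource is missing):

        HTTP/1.1 404 Not Found
        Content-Type: application/json

        {"error": "goal tree not found for organization 42"}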

    Read the article

  • Reboot failure after upgrade from 8.04 LTS to 10.04 LTS

    - by Alan Fietz
    I bought our computer from Freegeeks with Ubuntu 8.04 installed. I upgraded from Ubuntu 8.04 to 10.04 on Thursday November 10. I have an ASUS P4P800SE with dual Intel P4@3GHZ. Installation messages were: - Error loading Nautilus config info - Replaced customized /etc/login.defs - Replaced customized /etc/dhcp3/dhclient.conf - 189 packages removed - WARNING: Failed to read mirror file When I rebooted, the usual ASUS screen appeared, then "Loading GRUB", then "starting Up...", then "starting Up..." again, then a blank screen (the monitor went dormant). I rebooted, started GRUB and selected: version 10.04.3 LTS, kernel 2.6.32-35 generic. I got the same results. I rebooted, started GRUB and selected: kernel 2.6.24-29 generic. Here's what was displayed: udevd [875]: error getting socket: Invalid argument libudev: udev_monitor_new_from_netlink: error getting socket: Invalid argument Segmentation fault **Gave up waiting for root device** Common problems - Boot args (cat /proc/cmdline) - Check root delay - check root - Missing modules (cat /proc/modules); **Alert! /dev/disk/by_vvid/c59c6361 etc... does not exist. Dropping to a shell.** Then Busybox v1.13.3 started with the following prompt: (initramfs) _ But my typing did not appear on the screen. It appears the hard drive cannot be found. Any suggestions on how to remedy this? Thank you.

    Read the article

  • How to fix boot and mount failed drops to initramfs prompt in Ubuntu 12.04?

    - by msPeachy
    Ubuntu partition does not boot. This started after a power interruption during system boot. The next time I boot, I encountered the following error message: mount: mounting /dev/disk/by-uuid/3f7f5cd9d-6ea3-4da7-b5ec-**** on /root failed: Invalid argument mount: mounting /sys on /root/sys failed: No such file or directory mount: mounting /dev on /root/dev failed: No such file or directory mount: mounting /sys on /root/sys failed: No such file or directory mount: mounting /proc on /root/proc failed: No such file or directory Target file system doesn't have /sbin/init. No init found. Try passing init= bootarg. Busybox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash) Enter 'help' for a list of built-in commands. (initramfs) _ I've searched for similar posts here and most of the recommended solution is to reboot to the Ubuntu LiveCD. That's another problem because I cannot boot to a LIVEUSB, this is the error message I get when booting to a LiveUSB: Busybox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash) Enter 'help' for a list of built-in commands. (initramfs) mount: mounting /dev/sda2 on /isodevice failed: Invalid argument Could not find the ISO /ubuntu-12.04-desktop-i386.iso. This could also happen if the file system is not clean because of an operating system crash, an interrupted boot process, an improper shutdown, or unplugging of a removable device without first unmounting or ejecting it. To fix this, simply reboot into Windows, let it fully start, log in, run 'chkdsk /r', then gracefully shut down and reboot back into Windows. After this you should be able to reboot again and resume the installation. I cannot boot into Windows because I don't have a Windows partition. Do I have to install Windows to fix this problem? Is there a way to fix this in the (initramfs) prompt? Please help. Thank you!

    Read the article

  • Having issues using fdisk and GParted with Lexar 64GB USB, but reading and writing is fine

    - by MetaDark
    I have recently bought a new 64GB Lexar USB with the long model name of LJDV10-64G-000-106, and I am having issues partitioning it. I am able to mount, read and write to the USB without any issues, but whenever I try to partition it with GParted it doesn't show up in the dropdown. Also, while using the command sudo fdisk -l I receive the error fdisk: unable to seek on /dev/sdc: Invalid argument. This is a brand new USB so I am not sure why I am having these issues, especially since the device is functioning perfectly with read/write. I have tried reformatting it on a Windows machine but that does not seem to do anything. For those who want a visual, my USB looks almost exactly like the standard 8GB Lexar JumpDrive (image omitted), but 64GB rather than 8GB. Edit: I have just run GParted from the terminal and I am getting a similar error, but it may give more information on the issue: Could not determine physical sector size for /dev/sdc. Using the logical sector size (512). Invalid argument during seek for read on /dev/sdc Also, clearing my USB with /dev/zero fails with the message $ sudo dd if=/dev/zero of=/dev/sdc bs=1M dd: writing `/dev/sdc': No space left on device 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00167254 s, 0.0 kB/s

    Read the article

  • What is the difference between Callable<T> and Java 8's Supplier<T>?

    - by Dan Pantry
    I've been switching over to Java from C# after some recommendations from folks over at CodeReview. So, when I was looking into LWJGL, one thing I remembered was that every call to Display must be executed on the same thread that the Display.create() method was invoked on. Remembering this, I whipped up a class that looks a bit like this. public class LwjglDisplayWindow implements DisplayWindow { private final static int TargetFramesPerSecond = 60; private final Scheduler _scheduler; public LwjglDisplayWindow(Scheduler displayScheduler, DisplayMode displayMode) throws LWJGLException { _scheduler = displayScheduler; Display.setDisplayMode(displayMode); Display.create(); } public void dispose() { Display.destroy(); } @Override public int getTargetFramesPerSecond() { return TargetFramesPerSecond; } @Override public Future<Boolean> isClosed() { return _scheduler.schedule(() -> Display.isCloseRequested()); } } While writing this class you'll notice that I created a method called isClosed() that returns a Future<Boolean>. This dispatches a function to my Scheduler interface (which is nothing more than a wrapper around a ScheduledExecutorService). While writing the schedule method on the Scheduler I noticed that I could either use a Supplier<T> argument or a Callable<T> argument to represent the function that is passed in. ScheduledExecutorService didn't contain an overload for Supplier<T>, but I noticed that the lambda expression () -> Display.isCloseRequested() is actually type-compatible with both Callable<Boolean> and Supplier<Boolean>. My question is, is there a difference between those two, semantically or otherwise - and if so, what is it, so I can adhere to it?
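
    A minimal Java sketch of the signature difference usually pointed out here: the same lambda satisfies both interfaces, but Callable.call() declares a checked exception while Supplier.get() does not (the class and variable names are illustrative):

        import java.util.concurrent.Callable;
        import java.util.function.Supplier;

        public class CallableVsSupplier {
            public static void main(String[] args) throws Exception {
                // The same lambda is compatible with both functional interfaces.
                Callable<Boolean> asCallable = () -> Boolean.TRUE;
                Supplier<Boolean> asSupplier = () -> Boolean.TRUE;

                // Callable.call() is declared "throws Exception", so callers must handle it.
                Boolean fromCallable = asCallable.call();

                // Supplier.get() declares no checked exception.
                Boolean fromSupplier = asSupplier.get();

                System.out.println(fromCallable + " " + fromSupplier);
            }
        }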

    Read the article

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In What would you choose for your project between .NET and Java at this point in time? I say that I would consider the "Will you always deploy to Windows?" the single most important (EDIT: technical) decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is that "If we ever want to run on Linux/OS X/Whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons. OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK. Mono trails the .NET releases. What .NET level is currently fully supported? Do all GUI elements (WinForms?) work correctly in Mono? Businesses may not want to depend on Open Source frameworks as the official plan B. I am aware that with the new governance of Java by Oracle, the future is unsafe, but e.g. IBM provides JDKs for many platforms, including Linux. They are just not open sourced. So, under which circumstances is Mono a valid business strategy for .NET applications? Edit: Mark H summarized it as: "If the claim is that "I have a windows application written in .NET, it should run on mono", then not, it's not a valid claim - but Mono has made efforts to make porting such applications simpler.".

    Read the article

  • Unable to mount NTFS Partition after resizing

    - by sam
    I had only 15 GB of space allocated to Linux and wanted more space available to it, so I resized one of my NTFS partitions using GParted. But after resizing I am not able to open the partition either in Ubuntu or in Windows. OS: Dual Boot Win7/Ubuntu 10.10. The error message I get is the following: Error mounting: mount exited with exit code 12: Failed to read last sector (395458824): Invalid argument HINTS: Either the volume is a RAID/LDM but it wasn't setup yet, or it was not setup correctly (e.g. by not using mdadm --build ...), or a wrong device is tried to be mounted, or the partition table is corrupt (partition is smaller than NTFS), or the NTFS boot sector is corrupt (NTFS size is not valid). Failed to mount '/dev/sda5': Invalid argument The device '/dev/sda5' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    Read the article

  • Java's Object.wait method with nanoseconds: Is this a joke or am I missing something

    - by Krumia
    I was checking out the Java API source code (Java 8) just out of curiosity, and I found this in java/lang/Object.java. There are three methods named wait: public final native void wait(long timeout): This is the core of all wait methods, which has a native implementation. public final void wait(): Just calls wait(0). And then there is public final void wait(long timeout, int nanos). The JavaDoc for that particular method tells me that: This method is similar to the wait method of one argument, but it allows finer control over the amount of time to wait for a notification before giving up. The amount of real time, measured in nanoseconds, is given by: 1000000*timeout+nanos But this is how the method achieves "finer control over the amount of time to wait": if (nanos >= 500000 || (nanos != 0 && timeout == 0)) { timeout++; } wait(timeout); So this method basically does a crude rounding up of nanoseconds to milliseconds. Not to mention that anything below 500000ns/0.5ms will be ignored. Is this piece of code bad/unnecessary code, or am I missing some unseen virtue of declaring this method, and its no-argument cousin, the way they are?
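
    To see the rounding concretely, here is a small stand-alone sketch that mirrors the snippet quoted above (the method name is made up) and prints the millisecond timeout wait() ends up being called with:

        public class WaitRounding {
            // Mirrors the rounding logic quoted from Object.wait(long, int).
            static long effectiveMillis(long timeout, int nanos) {
                if (nanos >= 500000 || (nanos != 0 && timeout == 0)) {
                    timeout++;
                }
                return timeout;
            }

            public static void main(String[] args) {
                System.out.println(effectiveMillis(10, 499999)); // 10 -> the 499999 ns are dropped
                System.out.println(effectiveMillis(10, 500000)); // 11 -> rounded up by a full millisecond
                System.out.println(effectiveMillis(0, 1));       // 1  -> any non-zero nanos forces at least 1 ms
            }
        }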

    Read the article

  • Can DVCSs enforce a specific workflow?

    - by dukeofgaming
    So, I have this little debate at work where some of my colleagues (who are actually in charge of administering our Perforce instance) say that workflows are strictly a process thing, and that the tools we use (in this case, the version control system) have no take on it. In other words, the point they make is that workflows (and their execution) are tool-agnostic. My take on this is that DVCSs are better at encouraging people toward flexible and well-defined workflows, because of the inherent branching occurring in the background (anonymous branches), and that you can enforce workflows through the deployment model you establish (e.g. pull requests through repository management, dictator/lieutenant roles with their machines set up as servers, etc.). I think in CVCSs you have to enforce workflows through policies and policing, because there is only one way to share the code, while in DVCSs you just go with the flow based on the infrastructure/permissions that were set up for you. Even though I have provided the earlier arguments, I'm still unable to fully convince them. Am I saying something the wrong way? If not, what other arguments or examples do you think would be useful to convince them? Edit: The main workflow we have been focusing on, because it makes sense to both sides, is the Dictator/Lieutenants workflow. My argument for this particular workflow is that there is no pipeline in a CVCS (because there is just sharing work in a centralized way), whereas there is an actual pipeline in DVCSs depending on how you deploy read/write permissions. Their argument is that this workflow can be done through branching, and while they do this in some projects (due to policy/policing), in other projects they forbid developers from creating branches.

    Read the article

  • Can org.freedesktop.Notifications.CloseNotification(uint id) be triggered and invoked via DBus?

    - by george rowell
    ref: Close button on notify-osd? Bookmark: Can org.freedesktop.Notifications.CloseNotification(uint id) be triggered and invoked via DBus? Currently, this script dbus-monitor "interface='org.freedesktop.Notifications'" | \ grep --line-buffered "member=Notify" | \ sed -u -e 's/.*/killall notify-osd/g' | \ bash will kill all pending notifications. It would be better to finesse the specific target OSD notification to cancel, by using org.freedesktop.Notifications.CloseNotification(uint id). Is there an interface method that can put this on (in?) the DBus to fire when a particular notify event occurs? The method will need to get the notify PID to use as the argument for CloseNotification(uint id). Alternatively, qdbus org.freedesktop.Notifications \ /org/freedesktop/Notifications \ org.freedesktop.Notifications.CloseNotification(uint id) could be used from the shell, if the (uint id) argument could be determined. The actual command syntax would use an integer in place of (uint id). Perhaps a better question to ask first might be "How is the DBus address for a notification found?". In hindsight the previous question "How is the (uint id) for a notification found?" is rhetorical! This previous answer: http://askubuntu.com/a/186311/89468 provided details so either method below can be used: gdbus call --session --dest org.freedesktop.DBus \ --object-path / \ --method org.freedesktop.DBus.GetConnectionUnixProcessID :1.16 returning: (uint32 8957,) or qdbus --literal --session org.freedesktop.DBus / \ org.freedesktop.DBus.GetConnectionUnixProcessID :1.16 returning: 8957
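
    A hedged shell sketch of one way to obtain a usable id: org.freedesktop.Notifications.Notify itself returns the uint32 id of the notification it raises, which can then be handed to CloseNotification (the application name, text and 5-second timeout below are placeholders):

        # Raise a notification and capture the uint32 id that Notify returns.
        ID=$(gdbus call --session \
               --dest org.freedesktop.Notifications \
               --object-path /org/freedesktop/Notifications \
               --method org.freedesktop.Notifications.Notify \
               "demo" 0 "" "Title" "Body text" '[]' '{}' 5000 \
             | grep -oE '[0-9]+' | tail -1)

        # Ask the notification server to close exactly that notification.
        gdbus call --session \
              --dest org.freedesktop.Notifications \
              --object-path /org/freedesktop/Notifications \
              --method org.freedesktop.Notifications.CloseNotification "$ID"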

    Read the article

  • Prefer class members or passing arguments between internal methods?

    - by geoffjentry
    Suppose within the private portion of a class there is a value which is utilized by multiple private methods. Do people prefer having this defined as a member variable for the class or passing it as an argument to each of the methods - and why? On one hand I could see an argument to be made that reducing state (i.e. member variables) in a class is generally a good thing, although if the same value is being repeatedly used throughout a class's methods it seems like that would be an ideal candidate for representation as state for the class, to make the code visibly cleaner if nothing else. Edit: To clarify some of the comments/questions that were raised, I'm not talking about constants, and this isn't related to any particular case, rather just a hypothetical that I was talking to some other people about. Ignoring the OOP angle for a moment, the particular use case that I had in mind was the following (assume pass by reference just to make the pseudocode cleaner):

        int x
        doSomething(x)
        doAnotherThing(x)
        doYetAnotherThing(x)
        doSomethingElse(x)

    So what I mean is that there's some variable that is common between multiple functions - in the case I had in mind it was due to chaining of smaller functions. In an OOP system, if these were all methods of a class (say due to refactoring via extracting methods from a large method), that variable could be passed around them all or it could be a class member.
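
    To make the two alternatives concrete, here is a small Java sketch (the class and method names are hypothetical) contrasting passing the value around with holding it as a member:

        // Style A: the shared value travels as an explicit argument.
        class ReportBuilderByArgument {
            void build(int x) {
                doSomething(x);
                doAnotherThing(x);
            }
            private void doSomething(int x)    { /* uses x */ }
            private void doAnotherThing(int x) { /* uses x */ }
        }

        // Style B: the shared value becomes (transient) object state.
        class ReportBuilderByMember {
            private int x;                     // set once, read by the helper methods
            void build(int value) {
                this.x = value;
                doSomething();
                doAnotherThing();
            }
            private void doSomething()    { /* uses this.x */ }
            private void doAnotherThing() { /* uses this.x */ }
        }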

    Read the article

  • In MVC, why can't a model create a view?

    - by MUY Belgium
    I have a web application written in Perl with a controller, some "views" and some "Models". Each "Model" corresponds to one "View". The controller (one file) creates a Model object corresponding to the requested view (the view is a CGI argument) and then retrieves the view from the module it has just created. Indeed, this should be a bad thing, but can you argue a bit more about why? My first idea was that since the object "Model" depends upon the "view", the "model" is actually a view. But also the fact that ALL the CGI parameters are passed to the Model causes the "Model" to become not truly a view but to lose all interest, since it is only related to the current implementation of the web app. In other words, the "Model" keeps its model role but loses its "comprehensiveness" (the "Model" is not easily understandable). I am quite new to project analysis, so please do not be too harsh. Why is this bad? I have made a prototype with the main structures I have understood of this web application, made as short as possible:

        #Model.pm
        package Model;
        import {
            # this requires an attribute called "view"
            # and this requires an argument which is the cgi params
        }
        ...
        #View1.pm
        package View1;
        ...
        #Model1.pm
        package ModelView1;
        base Model;
        use View1;
        sub new {
            my $class = shift;
            my $arg = shift;
            Model::DoSomething($arg);
            $self->view = new View1($arg);
            ...
        }
        #controller.cgi
        my $model = 0;
        ...
        $model = new Model1( cgi_param => params() ); # there are several models here
        ...
        print $model->get_view()->get_html();
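
    For contrast, a rough Perl sketch of the more conventional arrangement, where the controller owns both objects and the Model never creates or sees the View (package names follow the prototype above; get_data() is a hypothetical accessor):

        # controller.cgi -- the controller builds the model, then hands its
        # data to a view it constructs itself.
        my $model = Model1->new( cgi_param => params() );
        my $view  = View1->new();
        print $view->get_html( $model->get_data() );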

    Read the article

  • Best Practices PHP mvc routing

    - by dukeofweatherby
    I have a custom MVC framework that is in a constant state of evolution. There's a long standing debate with a co-worker how the routing should work. Considering the following directory structure: /core/Router.php /mvc/Controllers/{Public controllers} /mvc/Controllers/Private/{Controllers requiring valid user} /mvc/Controllers/CMS/{Controllers requiring valid user and specific roles} The question is: "Where should the current User's authentication be established: in the Router, when choosing which controller/directory to load, or in each Controller?" My argument is that when authenticating in the Router, an Error Controller is created instead of the requested Controller, informing you of your mishap; And the directory structure clearly indicates the authentication required. His argument is that a router should do routing and only routing. Leave it to the Controller to handle it on a case by case basis. This is more modular and allows more flexibility should changes need to be made by the router. PHP MVC - Custom Routing Mechanism alluded to it, but the topic was of a different nature. Alternative suggestions would be welcomed as well.
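
    A rough PHP sketch of the two positions being debated (Auth, ErrorController, ControllerFactory and the Controller base class are hypothetical helpers; the paths mirror the directory structure above):

        <?php
        // Option A: the router gates access while resolving the controller,
        // handing back an Error controller instead of the requested one.
        function route(string $path): Controller {
            if (strpos($path, '/Private/') === 0 && !Auth::isLoggedIn()) {
                return new ErrorController('Authentication required');
            }
            return ControllerFactory::fromPath($path);
        }

        // Option B: routing only routes; each controller guards itself.
        class CmsArticleController extends Controller {
            public function __construct() {
                if (!Auth::hasRole('cms')) {
                    throw new AccessDeniedException();
                }
            }
        }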

    Read the article

  • Language Design: Are languages like Python and CoffeeScript really more comprehensible?

    - by kittensatplay
    The "Verbally Readable !== Quicker Comprehension" argument on http://ryanflorence.com/2011/case-against-coffeescript/ is really potent and interesting. I and I'm sure others would be very interested in evidence arguing against this. There's clear evidence for this and I believe it. People naturally think in images, not words, so we should be designing languages that aren't similar to human language like English, French, whatever. Being "readable" is quicker comprehension. Most articles on Wikipedia are not readable as they are long, boring, dry, sluggish and very very wordy. Because Wikipedia documents a ton of info, it is not especially helpful when compared to sites with more practical, useful and relevant info. Languages like Python and CoffeScript are "verbally readable" in that they are closer to English syntax. Having programmed firstly and mainly in Python, I'm not so sure this is really a good thing. The second interesting argument is that CoffeeScript is an intermediator, a step between two ends, which may increase the chance of bugs. While CoffeeScript has other practical benefits, this question specifically requests evidence showing support for the counter-case of language "readability"

    Read the article

  • GLSL custom interpolation filter

    - by Cyan
    I'm currently building a fragment shader which uses several textures to render the final pixel color. The textures are not really textures, they are in fact "input data" to be used in the formula to generate the final color. The problem I've got is that the textures are getting bilinear-filtered, and therefore the input data as well. This results in many unwanted side effects, especially when the final rendered texture is "zoomed" compared to the original resolution. Removing the side effects is a complex task, and only results in "average" rendering. I was thinking: well, all my problems seem to come from the "default" bilinear filtering on these input data. I can't move to GL_NEAREST either, since it would create "blocky" rendering. So I guess the better way to proceed is to be fully in charge of the interpolation. For this to work, I would need the input data at their "natural" resolution (so that means 4 samples), and a relative position between the sampled points. Is that possible, and if yes, how? [EDIT] Since I started this question, I found this internet entry, which seems to (mostly) answer my needs. http://www.gamerendering.com/2008/10/05/bilinear-interpolation/ One aspect of the solution worries me though: the dimensions of the texture must be provided in an argument. It seems there is no way to "find this information transparently". Adding an argument into the rendering pipeline is unwelcome though, since it's not under my responsibility, and translates into adding complexity for others.
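
    For reference, a hedged GLSL sketch of doing the bilinear mix by hand while the texture itself is sampled with GL_NEAREST, so the shader controls the interpolation; with GLSL 1.30 or later, textureSize() also removes the need to pass the texture dimensions as an extra argument (the uniform name is made up):

        uniform sampler2D dataTex;   // input data, filtered with GL_NEAREST

        vec4 manualBilinear(vec2 uv)
        {
            vec2 texSize = vec2(textureSize(dataTex, 0));
            vec2 pos     = uv * texSize - 0.5;           // position in texel space
            vec2 f       = fract(pos);                   // relative position between the 4 samples
            vec2 base    = (floor(pos) + 0.5) / texSize; // center of the lower-left texel
            vec2 texel   = 1.0 / texSize;

            vec4 t00 = texture(dataTex, base);
            vec4 t10 = texture(dataTex, base + vec2(texel.x, 0.0));
            vec4 t01 = texture(dataTex, base + vec2(0.0, texel.y));
            vec4 t11 = texture(dataTex, base + texel);

            return mix(mix(t00, t10, f.x), mix(t01, t11, f.x), f.y);
        }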

    Read the article

  • recovering raid 0 hard disk

    - by Hiawatha
    I bumped into a huge (for me) problem. I was running a dual boot system (Win 7 / Linux) and at some point I decided to test Fedora (I am new to Linux). My hard disk configuration: 3 hard disks of 1 TB each, 2 set to RAID 0 with Windows running on it and 1 for Linux. After installing it from a live USB I found out that Windows 7 is not in GRUB anymore and booting shows a RAID error. I installed Ubuntu back, ran Disk Utility and checked: now one of the (RAID 0) disks has a failed (READ) error. The first has 5 bad sectors and the second has 1 bad sector. And now I don't know what to do and how to repair it. Further, I don't know which data I could provide to get help. I tried ntfsfix and got this output: Mounting volume... NTFS signature is missing. FAILED Attempting to correct errors... NTFS signature is missing. FAILED Failed to startup volume: Invalid argument NTFS signature is missing. Trying the alternate boot sector Unrecoverable error Volume is corrupt. You should run chkdsk. # sudo ntfs-3g -o force,rw /dev/sdb /media/windows NTFS signature is missing. Failed to mount '/dev/sdb': Invalid argument The device '/dev/sdb' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    Read the article

  • Should developers do their own software releases (if there is a prod support team in place)?

    - by leora
    I know there are always going to be differences depending on the particular size, staff, etc., but I wanted to get feedback in general around: In an environment where you have a production support team doing first line support and release management, is it better to simply have developers manage their own releases instead? In this case, it's internal software at an insurance company, but the question should be valid at any company, size, etc., I think. Currently, we have our production team do releases, but there is an argument that it's inefficient and that if you gave developers the ability to do it, they would focus more on making it simple and efficient and avoid basically passing on scripts, etc. to run to another team. The counter argument is that if you don't have a check and balance, you could get a software team (or an individual) that does a very hacky job of getting their software out there (making on-the-fly changes, not documenting the process, etc.), and that by forcing the prod support team to do the actual release, it enforces consistency and proper checks and balances. I know this is not a black or white issue, but I wanted to see what folks thought on this, so that the discipline and consistency is there but without the feeling that an inefficient process is in place.

    Read the article

  • Recovering a lost Oracle datafile without a backup

    - by ??-Oracle
    Summary: if a datafile is lost and there is no backup, but all of the redo generated since the datafile was created is still available, the file can be recreated with ALTER DATABASE CREATE DATAFILE and then recovered by applying that redo. The session below demonstrates this.

    First, create a test table in the affected tablespace and note its datafile:

        SQL> create table test_b (id number(10)) tablespace bbb;
        Table created.

        SQL> insert into test_b values (1);
        1 row created.

        SQL> commit;

        NAME                                                                  FILE#
        ------------------------------------------------------------------ ----------
        /opt/oracle/oradata/R11203/aaa.dbf                                       10
        /opt/oracle/oradata/R11203/bbb.dbf                                       11

        11 rows selected.

        SQL> host

    At the OS level, move the datafile away to simulate losing it, then reconnect:

        bash-4.2$ ls -al /opt/oracle/oradata/R11203/bbb.dbf
        -rw-rw----   1 oracle     dba        10493952 Apr  4 09:53 /opt/oracle/oradata/R11203/bbb.dbf
        bash-4.2$ mv /opt/oracle/oradata/R11203/bbb.dbf /opt/oracle/oradata/R11203/bbb.dbf.bak
        bash-4.2$ sqlplus / as sysdba

        SQL*Plus: Release 11.2.0.3.0 Production on Fri Apr 4 09:55:03 2014
        Copyright (c) 1982, 2011, Oracle.  All rights reserved.

        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

    The database can no longer open the file:

        SQL> alter tablespace bbb read only;
        alter tablespace bbb read only
        *
        ERROR at line 1:
        ORA-01116: error in opening database file 11
        ORA-01110: data file 11: '/opt/oracle/oradata/R11203/bbb.dbf'
        ORA-27041: unable to open file
        HPUX-ia64 Error: 2: No such file or directory
        Additional information: 3

        SQL> shutdown immediate;
        ORA-01116: error in opening database file 11
        ORA-01110: data file 11: '/opt/oracle/oradata/R11203/bbb.dbf'
        ORA-27041: unable to open file
        HPUX-ia64 Error: 2: No such file or directory
        Additional information: 3

        SQL> select status from v$instance;

        STATUS
        ------------
        OPEN

    The instance stays open and log switches still succeed:

        SQL> alter system switch logfile;
        System altered.
        SQL> /
        System altered.
        SQL> /
        System altered.
        SQL> /
        System altered.
        SQL> /
        System altered.
        SQL> /
        System altered.

    A clean shutdown still fails, so abort and bring the instance back to mount:

        SQL> shutdown immediate;
        ORA-01116: error in opening database file 11
        ORA-01110: data file 11: '/opt/oracle/oradata/R11203/bbb.dbf'
        ORA-27041: unable to open file
        HPUX-ia64 Error: 2: No such file or directory
        Additional information: 3

        SQL> shutdown abort;
        ORACLE instance shut down.

        SQL> startup mount;
        ORACLE instance started.

        Total System Global Area  329859072 bytes
        Fixed Size                  2182336 bytes
        Variable Size             285213504 bytes
        Database Buffers           37748736 bytes
        Redo Buffers                4714496 bytes
        Database mounted.

    Recreate the datafile with ALTER DATABASE CREATE DATAFILE <file#> AS ..., then apply the redo logs to recover it and open the database:

        SQL> alter database create datafile 11;
        Database altered.

        SQL> recover datafile 11;
        Media recovery complete.

        SQL> alter database open;
        Database altered.
        SQL> select * from test_b;

                ID
        ----------
                 1

    Read the article

  • Help with Java Program for Prime numbers

    - by Ben
    Hello everyone, I was wondering if you can help me with this program. I have been struggling with it for hours and have just trashed my code because the TA doesn't like how I executed it. I am completely hopeless and if anyone can help me out step by step, I would greatly appreciate it. In this project you will write a Java program that reads a positive integer n from standard input, then prints out the first n prime numbers. We say that an integer m is divisible by a non-zero integer d if there exists an integer k such that m = k*d, i.e. if d divides evenly into m. Equivalently, m is divisible by d if the remainder of m upon (integer) division by d is zero. We would also express this by saying that d is a divisor of m. A positive integer p is called prime if its only positive divisors are 1 and p. The one exception to this rule is the number 1 itself, which is considered to be non-prime. A positive integer that is not prime is called composite. Euclid showed that there are infinitely many prime numbers. The prime and composite sequences begin as follows: Primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, … Composites: 1, 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 22, 24, 25, 26, 27, 28, … There are many ways to test a number for primality, but perhaps the simplest is to simply do trial divisions. Begin by dividing m by 2, and if it divides evenly, then m is not prime. Otherwise, divide by 3, then 4, then 5, etc. If at any point m is found to be divisible by a number d in the range 2 ≤ d ≤ m-1, then halt, and conclude that m is composite. Otherwise, conclude that m is prime. A moment's thought shows that one need not do any trial divisions by numbers d which are themselves composite. For instance, if a trial division by 2 fails (i.e. has non-zero remainder, so m is odd), then a trial division by 4, 6, or 8, or any even number, must also fail. Thus to test a number m for primality, one need only do trial divisions by prime numbers less than m. Furthermore, it is not necessary to go all the way up to m-1. One need only do trial divisions of m by primes p in the range 2 ≤ p ≤ √m. To see this, suppose m > 1 is composite. Then there exist positive integers a and b such that 1 < a < m, 1 < b < m, and m = ab. But if both a > √m and b > √m, then ab > m, contradicting that m = ab. Hence one of a or b must be less than or equal to √m. To implement this process in Java you will write a function called isPrime() with the following signature: static boolean isPrime(int m, int[] P) This function will return true or false according to whether m is prime or composite. The array argument P will contain a sufficient number of primes to do the testing. Specifically, at the time isPrime() is called, array P must contain (at least) all primes p in the range 2 ≤ p ≤ √m. For instance, to test m = 53 for primality, one must do successive trial divisions by 2, 3, 5, and 7. We go no further since 11 > √53. Thus a precondition for the function call isPrime(53, P) is that P[0] = 2, P[1] = 3, P[2] = 5, and P[3] = 7. The return value in this case would be true since all these divisions fail. Similarly, to test m = 143, one must do trial divisions by 2, 3, 5, 7, and 11 (since 13 > √143). The precondition for the function call isPrime(143, P) is therefore P[0] = 2, P[1] = 3, P[2] = 5, P[3] = 7, and P[4] = 11. The return value in this case would be false since 11 divides 143. Function isPrime() should contain a loop that steps through array P, doing trial divisions.
    This loop should terminate either when a trial division succeeds, in which case false is returned, or when the next prime in P is greater than √m, in which case true is returned. Function main() in this project will read the command line argument n, allocate an int array of length n, fill the array with primes, then print the contents of the array to stdout according to the format described below. In the context of function main(), we will refer to this array as Primes[]. Thus array Primes[] plays a dual role in this project. On the one hand, it is used to collect, store, and print the output data. On the other hand, it is passed to function isPrime() to test new integers for primality. Whenever isPrime() returns true, the newly discovered prime will be placed at the appropriate position in array Primes[]. This process works since, as explained above, the primes needed to test an integer m range only up to √m, and all of these primes (and more) will already be stored in array Primes[] when m is tested. Of course it will be necessary to initialize Primes[0] = 2 manually, then proceed to test 3, 4, … for primality using function isPrime(). The following is an outline of the steps to be performed in function main().
    • Check that the user supplied exactly one command line argument which can be interpreted as a positive integer n. If the command line argument is not a single positive integer, your program will print a usage message as specified in the examples below, then exit.
    • Allocate array Primes[] of length n and initialize Primes[0] = 2.
    • Enter a loop which will discover subsequent primes and store them as Primes[1], Primes[2], Primes[3], …, Primes[n-1]. This loop should contain an inner loop which walks through successive integers and tests them for primality by calling function isPrime() with appropriate arguments.
    • Print the contents of array Primes[] to stdout, 10 to a line separated by single spaces. In other words, Primes[0] through Primes[9] will go on line 1, Primes[10] through Primes[19] will go on line 2, and so on. Note that if n is not a multiple of 10, then the last line of output will contain fewer than 10 primes.
    Your program, which will be called Prime.java, will produce output identical to that of the sample runs below. (As usual % signifies the unix prompt.) % java Prime Usage: java Prime [PositiveInteger] % java Prime xyz Usage: java Prime [PositiveInteger] % java Prime 10 20 Usage: java Prime [PositiveInteger] % java Prime 75 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 127 131 137 139 149 151 157 163 167 173 179 181 191 193 197 199 211 223 227 229 233 239 241 251 257 263 269 271 277 281 283 293 307 311 313 317 331 337 347 349 353 359 367 373 379 % As you can see, inappropriate command line argument(s) generate a usage message which is similar to that of many unix commands. (Try running the more command with no arguments to see such a message.) Your program will include a function called Usage() having signature static void Usage() that prints this message to stderr, then exits. Thus your program will contain three functions in all: main(), isPrime(), and Usage(). Each should be preceded by a comment block giving its name, a short description of its operation, and any necessary preconditions (such as those for isPrime()). See examples on the webpage.
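
    A minimal Java sketch of the isPrime() helper as the assignment specifies it (trial division by the primes already collected in P, stopping once the next prime's square exceeds m); this is an illustration of the specification, not the graded solution:

        public class PrimeSketch {
            // Trial-divide m by the primes stored at the front of P.
            // Precondition (from the assignment): P already holds every prime p with p <= sqrt(m).
            static boolean isPrime(int m, int[] P) {
                for (int i = 0; i < P.length && P[i] * P[i] <= m; i++) {
                    if (m % P[i] == 0) {
                        return false;   // a trial division succeeded, so m is composite
                    }
                }
                return true;            // no prime up to sqrt(m) divides m
            }

            public static void main(String[] args) {
                System.out.println(isPrime(53, new int[]{2, 3, 5, 7}));        // true
                System.out.println(isPrime(143, new int[]{2, 3, 5, 7, 11}));   // false: 11 divides 143
            }
        }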

    Read the article
