Search Results

Search found 18926 results on 758 pages for 'systems programming'.


  • Is this bad style of programming (C#)?

    - by m0s
    Hi, so in my program I have parts where I use try-catch blocks like this:

        try
        {
            DirectoryInfo dirInfo = new DirectoryInfo(someString);
            // I don't know if that directory exists
            // I don't know if that string is a valid path... it could be anything
            // Some operations here
        }
        catch (Exception iDontCareWhyItFailed)
        {
            // Didn't work? Great... we will say: something's wrong, try again/next one
        }

    Of course I could probably do checks to see if the string is a valid path (regex), then check whether the directory exists, then catch various exceptions to see why my routine failed and give more info... but in my program that's not really necessary. I just need to know whether this is acceptable, and what a pro would say/think about it. Thanks a lot for your attention.

  • AIX specific socket programming query

    - by kumar_m_kiran
    Hi All,

    Question 1: From the SUSE man pages, I get the following details for the connect() options: "If the initiating socket is connection-mode, then connect() shall attempt to establish a connection to the address specified by the address argument. If the connection cannot be established immediately and O_NONBLOCK is not set for the file descriptor for the socket, connect() shall block for up to an unspecified timeout interval until the connection is established. If the timeout interval expires before the connection is established, connect() shall fail and the connection attempt shall be aborted. If connect() is interrupted by a signal that is caught while blocked waiting to establish a connection, connect() shall fail and set errno to [EINTR], but the connection request shall not be aborted, and the connection shall be established asynchronously." Is the above also valid for AIX (especially the connection timeout, timed wait, etc.)? I do not see it in the AIX man pages (5.1 and 5.3).

    Question 2: I have a client socket whose attributes are: (a) SO_RCVTIMEO and SO_SNDTIMEO are set to 5 seconds; (b) AF_INET and SOCK_STREAM; (c) SO_LINGER with linger on and a time of 5 seconds; (d) SO_REUSEADDR is set. Note that the client socket is not O_NONBLOCK. Since O_NONBLOCK is not set and SO_RCVTIMEO and SO_SNDTIMEO are set to 5 seconds: (a) Is connect() non-blocking or blocking? (b) If blocking, is it timed blocking or "infinite" blocking? (c) If it is infinite, how do I make a blocking connect() call that times out after t seconds? Sorry if the questions are very naive. Thanks in advance for your input.
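
    Regarding question 2(c): a common portable pattern -- a minimal sketch only, not AIX-specific, and connect_with_timeout is a name invented here -- is to put the socket into O_NONBLOCK mode, start the connect, wait in select() for the caller's timeout, then read SO_ERROR to learn the outcome:

        #include <errno.h>
        #include <fcntl.h>
        #include <sys/select.h>
        #include <sys/socket.h>
        #include <sys/time.h>

        /* Connect fd to addr, waiting at most timeout_secs.
           Returns 0 on success, -1 on error or timeout (errno is set). */
        int connect_with_timeout(int fd, const struct sockaddr *addr,
                                 socklen_t alen, int timeout_secs)
        {
            int flags = fcntl(fd, F_GETFL, 0);
            if (flags < 0 || fcntl(fd, F_SETFL, flags | O_NONBLOCK) < 0)
                return -1;

            int rc = connect(fd, addr, alen);
            if (rc < 0 && errno != EINPROGRESS)
                return -1;                      /* immediate failure */

            if (rc != 0) {                      /* attempt is in progress */
                fd_set wset;
                struct timeval tv;
                FD_ZERO(&wset);
                FD_SET(fd, &wset);
                tv.tv_sec  = timeout_secs;
                tv.tv_usec = 0;

                rc = select(fd + 1, NULL, &wset, NULL, &tv);
                if (rc == 0) {                  /* nothing ready: timed out */
                    errno = ETIMEDOUT;
                    return -1;
                }
                if (rc < 0)
                    return -1;

                int err = 0;
                socklen_t elen = sizeof(err);   /* did the connect succeed? */
                if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &elen) < 0)
                    return -1;
                if (err != 0) {
                    errno = err;                /* asynchronous failure */
                    return -1;
                }
            }

            return fcntl(fd, F_SETFL, flags);   /* restore blocking mode */
        }

    Note that on most implementations SO_RCVTIMEO and SO_SNDTIMEO bound recv()/send() operations; whether they also bound connect() is implementation-defined, which is why the non-blocking pattern above is the usual portable answer.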

  • Programming Quiz [closed]

    - by arin-s-rizk
    Hi, one of my mates sent me this quiz; see if you can guess the answers, and I will post mine later. In this quiz, some tasks related to the compilation process are listed. For each one of them, specify the part of the compiler that is responsible for performing it. Here are the possible answers: (1) Lexical analyzer, (2) Parser, (3) Semantic analyzer, (4) None of the above. Just fill in the right choice (the number only) in the blank after each task:
    - Checking that the parentheses in an expression are balanced _____
    - Removing comments from the program _____
    - Grouping input characters into "tokens" _____
    - Reporting an error to the programmer about a missing (;) at the end of a C++ statement _____
    - Checking if the type of the RHS (right-hand side) of an assignment (=) is compatible with the LHS (left-hand side) variable _____
    - Converting the AST (Abstract Syntax Tree) into machine language _____
    - Reporting an error about a strange character like '^' in a C++ program _____
    - Optimizing the AST _____

  • What programming language is FogBugz written in?

    - by Earlz
    From what I've read it appears that FogBugz was originally written in VBScript. Now apparently they use their own custom compiler and language that will translate the source code to more "accessible" languages such as PHP and (I think) C#. Is there a name for this language? What does a hello world look like in it? Is there any hope of seeing this compiler released to the public?

  • Programming terms - field, member, properties (C#)

    - by Petr
    Hi, I was trying to find the meaning of these terms, but especially due to the language barrier I was not able to understand what they are used for. I assume that a "field" is a variable (an object too?) in the class, while a "property" is just an object that returns a specific value and cannot contain methods, etc. By "member" I understand anything that is declared at the class level. But these are just my assumptions, based on commented code samples where some careful programmers used a "property region", etc. I would really appreciate it if someone could explain these terms to me.

  • Choose Your Own Adventure: BASIC Programming

    - by theraccoonbear
    Hopefully this isn't considered too off-topic, but I guess we'll see. I'd love to find and frame a copy of this book. Years ago, in my pre-teen years, I remember reading a lot of CYOA books, and one in particular stands out in my mind as the book that started me down the path of becoming a programmer. The details are fuzzy, but what I remember is that the story involved a programmer who was held captive somewhere and was trying to escape. IIRC, each section or chapter had a short BASIC program you could type into your computer to simulate something from the story. The one that stands out most in my mind was a very simplistic animation made with pipes, pluses, and dashes that "looked" like a metal grate opening (sliding upward). I realize this is pretty scant information to go on, but I suspect that anyone else who read the book would immediately remember it. Maybe not, I guess we'll see. Again, my apologies if this is too far off-topic for S.O.

  • Objective-measures of the expressiveness of programming languages [closed]

    - by Casebash
    I am very interested in the expressiveness of different languages. Everyone who has programmed in multiple languages knows that sometimes a language allows you to express concepts which you can't express in other languages. You can have all kinds of subjective discussion about this, but naturally it would be better to have an objective measure.

    There do actually exist objective measures. One is Turing completeness, which means that a language is capable of generating any output that could be generated by following a sequential set of steps. There are also lesser levels of expressiveness, such as finite state automata. Now, except for domain-specific languages, pretty much all modern languages are Turing complete. It is therefore natural to ask the following question: can we define any other formal measures of expressiveness which are greater than Turing completeness?

    Now of course we can't define this by considering the output that a program can generate, as Turing machines can already produce the same output that any other program can. But there are definitely different levels in what concepts can be expressed - surely no-one would argue that assembly language is as powerful as a modern object-oriented language like Python. You could use your assembly to write a Python interpreter, so clearly any accurate objective measure would have to exclude this possibility. This also causes a problem with trying to define expressiveness using the minimum number of symbols. Exactly how to do so is not clear and indeed appears extremely difficult, but we can't assume that just because we don't know how to solve a problem, nobody knows how to. It also doesn't really make sense to demand a definition of expressiveness before answering the question - after all, the whole point of this question is to obtain such a definition.

    I think that my explanation will be clear enough for anyone with a strong theoretical background in computer science to understand what I am looking for. If you do have such a background and you disagree, please comment why, but if you don't, that's probably why you don't understand the question.

  • Regarding the Java programming language

    - by giri
    Hi, I have been a Java professional for the past year. I am pretty familiar with core Java and with the JSP and Servlet technologies. Now I have been hired by a telecom company where Java is not used. The question I would like to ask here is how to keep a Java environment around me so that I do not become unfamiliar with Java. Since the company does not use Java, I will not get much time to work with it on the job. I would like to know of any real-time Java projects that are available, so that I can keep working with Java as well. Please let me know... Thanks in advance.

  • C programming language [closed]

    - by ash89
    Write a program in C to solve the following: the input contains a sequence of two or more positive integers terminated by -1. Write a piece of code to count the "incidences" in this sequence (i.e. the number of pairs of equal, adjacent numbers). For example, the following sequence contains 4 incidences: 4 2 9 9 3 7 7 7 3 3 -1
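
    A minimal sketch of one possible solution (assuming whitespace-separated integers on standard input):

        #include <stdio.h>

        int main(void)
        {
            int prev, cur, count = 0;

            /* Read the first number; the sequence is terminated by -1. */
            if (scanf("%d", &prev) != 1 || prev == -1)
                return 0;

            while (scanf("%d", &cur) == 1 && cur != -1) {
                if (cur == prev)    /* a pair of equal, adjacent numbers */
                    count++;
                prev = cur;
            }

            printf("%d\n", count);  /* prints 4 for the example sequence */
            return 0;
        }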

  • Thread placement policies on NUMA systems - update

    - by Dave
    In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc. -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with the lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc. -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though. This description is incorrect.

    On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the first paragraph.

    I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect wasn't related to thread creation time, but rather that placement was a function of process membership.

    At this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different from what we've seen on Solaris 10.

    So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior. There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically. In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have the kernel spread your process's threads out more.

    To recap, the "new" policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold the kernel then picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication. The truth is in the code.

    Remarks:

    - Initial thread placement is just that: initial. If there's a gross imbalance between the load on different nodes then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact, and which have precedence under what circumstances, could be the topic of a future blog entry.

    - The scheduler is work-conserving.

    - The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links.

    - As an aside, the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest-order bits identifying the socket, the next lower bits identifying the core within that socket, followed by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows: bit [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or "package" in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, as the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly.

    - Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however. That is, it's an intra-socket policy, not an inter-socket policy.

    - Solaris 11 introduces the Power Aware Dispatcher (PAD), which packs threads instead of spreading them out in an attempt to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching.

    - If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, it's possible the new packing policy will result in contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)
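
    For readers who want to watch these policies in action, here is a minimal observation sketch -- assuming Solaris and the liblgrp observability API (compile with something like cc -mt lgrp_demo.c -llgrp) -- in which each compute-bound thread reports the lgroup (locality group / NUMA node) it was homed on:

        #include <stdio.h>
        #include <stdlib.h>
        #include <pthread.h>
        #include <sys/lgrp_user.h>
        #include <sys/procset.h>

        static void *report_home(void *arg)
        {
            long n = (long)arg;
            /* lgrp_home() reports the calling LWP's home lgroup. */
            lgrp_id_t home = lgrp_home(P_LWPID, P_MYID);
            printf("thread %ld homed on lgroup %d\n", n, (int)home);

            /* Spin so placement is also observable via prstat/mpstat. */
            for (volatile long i = 0; i < 200000000L; i++)
                ;
            return NULL;
        }

        int main(int argc, char **argv)
        {
            long i, nthreads = (argc > 1) ? atol(argv[1]) : 20;
            pthread_t *tids = malloc(nthreads * sizeof(pthread_t));

            for (i = 0; i < nthreads; i++)
                pthread_create(&tids[i], NULL, report_home, (void *)i);
            for (i = 0; i < nthreads; i++)
                pthread_join(tids[i], NULL);

            free(tids);
            return 0;
        }

    On a Solaris 11 T4-4 as described above, one would expect the first 32 threads of a single process to report the same lgroup before the kernel starts choosing another node.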

  • What are the advantages of version control systems that version each file separately?

    - by Mike Daniels
    Over the past few years I have worked with several different version control systems. For me, one of the fundamental differences between them has been whether they version files individually (each file has its own separate version numbering and history) or the repository as a whole (a "commit" or version represents a snapshot of the whole repository). Some "per-file" version control systems: CVS ClearCase Visual SourceSafe Some "whole-repository" version control systems: SVN Git Mercurial In my experience, the per-file version control systems have only led to problems, and require much more configuration and maintenance to use correctly (for example, "config specs" in ClearCase). I've had many instances of a co-worker changing an unrelated file and breaking what would ideally be an isolated line of development. What are the advantages of these per-file version control systems? What problems do "whole-repository" version control systems have that per-file version control systems do not?

  • Microsoft Press Deal of the day 4/Sep/2012 - Programming Microsoft® SQL Server® 2012

    - by TATWORTH
    Today's deal of the day from Microsoft Press at http://shop.oreilly.com/product/0790145322357.do?code=MSDEAL is Programming Microsoft® SQL Server® 2012 "Your essential guide to key programming features in Microsoft® SQL Server® 2012 Take your database programming skills to a new level—and build customized applications using the developer tools introduced with SQL Server 2012. This hands-on reference shows you how to design, test, and deploy SQL Server databases through tutorials, practical examples, and code samples. If you’re an experienced SQL Server developer, this book is a must-read for learning how to design and build effective SQL Server 2012 applications."

  • How do you visually represent programming skills?

    - by TomSchober
    I had a discussion with a recruiter recently that made me wish I could visually represent programming skills. In trying to explain how skills relate, what are the important properties of those skills? Would a tagging model work (i.e. "Design Pattern," "Programming Language," "IDE," or "VCS")? Are they really hierarchical? Clarification: the real problem I see is communicating the level of granularity among skill sets. For instance, saying someone "knows Java" is uselessly broad as a description of what someone can DO. Saying they know how to write web services with the Java programming language is a bit better. To go even further, saying they know Spring as a tool underneath all that is probably specific enough. What should we call those levels of granularity? What are the relationships between the terms we use? i.e. framework to language, tool to language, framework to solution (like web services), etc.

  • Programming language features that help to catch bugs early

    - by Christian Neumanns
    Do you know any programming language features that help to detect bugs early in the software development process - ideally at compile time, or else as early as possible at run time? Examples of well-known and effective bug-reducing features are:
    - Static typing and generic types: type-incompatibility errors are detected by the compiler
    - Design by Contract (TM), also called Contract Programming: invalid values are quickly detected at runtime (through preconditions, postconditions and class invariants)
    - Unit testing
    I ask this question in the context of improving an object-oriented programming language (called Obix) which has been designed from the ground up to "make it easy to quickly write reliable code". Besides the features mentioned above, this language also incorporates other fail-fast features, such as:
    - Objects are immutable by default
    - Void (null) values are not allowed by default
    The aim is to add more fail-fast concepts to the language. If you know of other features that help developers write less error-prone code, please let us know. Thank you.
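
    To illustrate the contract-programming idea in a language without built-in support, here is a minimal sketch in C; the REQUIRE and ENSURE macros are invented for illustration and are not part of Obix or of any standard library:

        #include <assert.h>
        #include <stdio.h>

        /* Fail fast: abort immediately when a contract is violated. */
        #define REQUIRE(cond) assert((cond) && "precondition violated")
        #define ENSURE(cond)  assert((cond) && "postcondition violated")

        /* Integer square root with explicit pre- and postconditions. */
        static unsigned isqrt(unsigned n)
        {
            REQUIRE(n <= 1000000u);  /* precondition: stay within a tested range */

            unsigned r = 0;
            while ((r + 1) * (r + 1) <= n)
                r++;

            /* postcondition: r is the floor of the square root of n */
            ENSURE(r * r <= n && (r + 1) * (r + 1) > n);
            return r;
        }

        int main(void)
        {
            printf("isqrt(10) = %u\n", isqrt(10));  /* prints isqrt(10) = 3 */
            return 0;
        }

    A class invariant would be checked in the same fashion, typically at the start and end of every public operation on the type.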

  • Good Literature for "Object oriented programming in C"

    - by Dipan Mehta
    This is not a debate question about whether or not C is a good candidate for object-oriented programming. Quite often, C is the primary platform where the development is happening. I have seen, and hopefully learnt from crawling through many open-source and commercial projects, that while the language doesn't inherently stop you from writing "non-object" code, you can still think in the "object" way and reasonably write code that captures this design thinking. For those who have done this, the OO way is still the best way to write code, even when you are programming in C. While I have learnt most of it the hard way, is there any deep literature that can help educate relatively young programmers about how to do OO programming in C?
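
    As a taste of the pattern that such literature usually teaches, here is a minimal sketch (my own illustration, not taken from any particular book) of an "interface" built as a struct of function pointers, with a "subclass" embedding the base struct as its first member:

        #include <stdio.h>
        #include <stdlib.h>

        struct shape;                           /* forward declaration */

        /* The "vtable": an interface expressed as function pointers. */
        struct shape_ops {
            double (*area)(const struct shape *self);
            const char *name;
        };

        /* "Base class": every shape begins with a pointer to its ops. */
        struct shape {
            const struct shape_ops *ops;
        };

        /* "Subclass": embeds the base as its first member. */
        struct circle {
            struct shape base;
            double radius;
        };

        static double circle_area(const struct shape *self)
        {
            /* Safe downcast: base is the first member of struct circle. */
            const struct circle *c = (const struct circle *)self;
            return 3.14159265358979 * c->radius * c->radius;
        }

        static const struct shape_ops circle_ops = { circle_area, "circle" };

        struct shape *circle_new(double radius)
        {
            struct circle *c = malloc(sizeof *c);
            c->base.ops = &circle_ops;
            c->radius = radius;
            return &c->base;
        }

        int main(void)
        {
            struct shape *s = circle_new(2.0);
            /* "Virtual dispatch": call through the ops table. */
            printf("%s area = %f\n", s->ops->name, s->ops->area(s));
            free(s);
            return 0;
        }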

  • Learning Programming, Suggestions for a roadmap

    - by RisingSun
    Hi, some background first: I am new to programming and have discovered it rather late in life; like many hobbyists, my introduction to the subject has been through PHP/jQuery (yes, I know the popular mood around here... they-are-not-real-programming-languages ;-) ). I like to believe that I am reasonably competent at what I do in my other life, and this developing addiction to coding has taken a very heavy toll on my professional prospects. Here are the questions: What programming languages next? (No plans to ditch PHP in the immediate future; that would involve rewriting much of my code.) Any absolutely essential books I must read? Is it necessary to join a college/university course? Do I need to ditch my other profession to continue serious learning? My goals are to develop a solid understanding of the science and art of programming, and to continue to work on my own web application (hands-on learning suits me best). I am something of a generalist, interested in everything from UI to database performance.
