Search Results

Search found 1981 results on 80 pages for 'lunar science'.

Page 72/80

  • Finding Leaders Breakfasts - Adelaide and Perth

    - by rdatson-Oracle
    HR Executives Breakfast Roundtables: find the best leaders using science and social media! Perth, 22nd July & Adelaide, 24th July.
    What is leadership in the 21st century? What does the latest research tell us about leadership? How do you recognise leadership qualities in individuals? And how do you find individuals with these qualities, then hire and develop them? Join the NeuroLeadership Institute, the Hay Group, and Oracle to hear: 1. the latest neuroscience research on human bias, and how it applies to finding and building better leaders; 2. the latest techniques for recognising leadership qualities in people; 3. how you can harness your people and social media to find the best people for your company. Reflect on your hiring practices at this thought-provoking breakfast, where you will be challenged to consider whether you are using best practices aimed at getting the right people into your company.
    Speakers:
    Abigail Scott, Hay Group - Abigail is a UK-registered psychologist with 10 years' international experience in the design and delivery of talent frameworks and assessments. She has delivered innovative assessment programmes across a range of organisations to identify and develop leaders. She is experienced in advising and supporting clients through new initiatives using an evidence-based approach, and has published a number of research papers on fairness and predictive validity in assessment.
    Karin Hawkins, NeuroLeadership Institute - Karin is the Regional Director of the NeuroLeadership Institute's Asia-Pacific region. She brings over 20 years' experience in the financial services sector, delivering cultural and commercial results across a variety of organisations and functions. As a leadership risk specialist, Karin understands the challenge of building deep bench strength in teams, and she brings evidence, insight, and experience to support executives in meeting today's challenges.
    Robert Datson, Oracle - Robert is a Human Capital Management specialist at Oracle, with several years as a practicing manager at IBM, learning and implementing the latest management techniques for hiring, deploying and developing staff. At Oracle he works with clients to enable best practices for HR departments, drawing the linkages between HR initiatives and bottom-line improvements.
    Agenda:
    07:30 a.m. Breakfast and registration
    08:00 a.m. Welcome and introductions
    08:05 a.m. Breaking bias in leadership decisions - Karin Hawkins
    08:30 a.m. Identifying and developing leaders - Abigail Scott
    08:55 a.m. Finding leaders, the social way - Robert Datson
    09:20 a.m. Q&A and closing remarks
    09:30 a.m. Event concludes
    If you are an employee or official of a government organisation, please click here for important ethics information regarding this event.
    To register for Perth, Tuesday 22nd July, please click HERE. To register for Adelaide, Thursday 24th July, please click HERE.
    Contact: to register, or with any questions about the event, contact Aaron Tait on +61 2 9491 1404.

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without footnotes isn't an academic paper. Journalists wouldn't base a news article on facts they can't verify. So why would anyone publish reports without being able to say where the data came from and be confident of its quality; in other words, without knowing its lineage (sometimes referred to as 'provenance' or 'pedigree')? The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey from its source to our report, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven't been left out, or that the sources are good? Even when we're using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which they came. You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you're an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you're a developer working on a pipeline, it provides the context you need to track down the bug. If you're a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance. Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the ecosystem. To get a full picture of what's going on in your Hadoop system, you need to capture both Falcon lineage and the data-exhaust of the other tools that Falcon can't orchestrate. However, the problem is bigger even than that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where 'after' could be a data analysis environment like R, an application, or even directly an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with whatever tool they need.
    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs and your audit data from every source, bring them together, and use your data analysis tools to trace the paths from raw data to the answer that the analysis provides.
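    As a sketch of what "bringing the logs together" can look like: a minimal, hypothetical lineage event that any stage of a pipeline could emit, plus a walk from a finished report back to its raw sources. The record shape and field names are illustrative, not from Falcon or any particular tool.

        using System;
        using System.Collections.Generic;

        // A hypothetical minimal lineage record: the "stack-trace" for one processing step.
        record LineageEvent(string Output, string[] Inputs, string SystemName, string Operation, string Owner);

        static class LineageDemo
        {
            // Walk upstream from a result to the raw data it was derived from.
            static IEnumerable<string> Trace(string id, Dictionary<string, LineageEvent> producedBy)
            {
                if (!producedBy.TryGetValue(id, out var ev)) { yield return id; yield break; } // raw source
                foreach (var input in ev.Inputs)
                    foreach (var source in Trace(input, producedBy))
                        yield return source;
            }

            static void Main()
            {
                var events = new[]
                {
                    new LineageEvent("report", new[] { "joined" },           "R",     "model",  "analyst"),
                    new LineageEvent("joined", new[] { "clicks", "orders" }, "Hive",  "join",   "data-eng"),
                    new LineageEvent("clicks", new[] { "weblogs.raw" },      "Sqoop", "import", "ops"),
                    new LineageEvent("orders", new[] { "crm.raw" },          "Sqoop", "import", "ops"),
                };
                var producedBy = new Dictionary<string, LineageEvent>();
                foreach (var ev in events) producedBy[ev.Output] = ev;

                Console.WriteLine(string.Join(", ", Trace("report", producedBy))); // weblogs.raw, crm.raw
            }
        }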

    Read the article

  • What Counts for a DBA: Skill

    - by drsql
    "Practice makes perfect," right? Well, not exactly. The reality is that this saying is an untrustworthy aphorism. I discovered this in my "younger" days when I was a passionate tennis player, practicing and playing 20+ hours a week. No matter what my passion level was, without some serious coaching (and perhaps a change in dietary habits), my skill level was never going to rise to a level where I could make any money at the sport that involved something other than selling tennis balls at a sporting goods store. My game may have improved with all that practice, but I had too many bad practices to overcome. Practice by itself merely reinforces what we know and what we can figure out naturally. The truth is actually closer to the expression used by Vince Lombardi: "Perfect practice makes perfect." So how do you become skilled as a DBA if practice alone isn't sufficient? Hit the Internet and start searching for SQL training and you can find 100 different sites. There are also hundreds of blogs, magazines, books, and conferences, both onsite and virtual. But then how do you know who is good? Unfortunately, the experience level of the writer is often the worst guide. Some of the best DBAs are frighteningly young, and some got their start back when databases were stored on stacks of paper with little holes in them. As a programmer, is it really so hard to understand normalization? Set-based theory? Query optimization? Indexing and performance tuning? The biggest barrier is often previous knowledge, particularly programming skills cultivated before you got started with SQL. In the world of technology, it is pretty rare that a fresh programmer will gravitate to database programming. Database programming is very unsexy work, because without a UI all you have are a bunch of text strings that you could never impress anyone with. Newbies spend most of their time building UIs or apps with procedural code in C# or VB, scoring obvious, interesting wins. Making matters worse, SQL programming requires mastery of a very different toolset from most mainstream programming skills. Instead of controlling everything yourself, most of the really difficult work is done by the internals of the engine (written by other, non-relational programmers... we just can't get away from them). So is there a golden road to achieving a high skill level? Sadly, with tennis, I am pretty sure I'll never discover it. However, with programming it seems to boil down to practice in applying the appropriate techniques for whatever type of programming you are doing. Can a C# programmer build a great database? As long as they don't treat SQL like C#, absolutely. Same goes for a DBA writing C# code. None of this stuff is rocket science, as long as you understand that different types of programming require different skill sets, and you as a programmer must recognize the difference between the procedural languages and SQL and treat them differently. Skill comes from practicing doing things the right way and making "right" a habit.

    Read the article

  • When should I use a Process Model versus a Use Case?

    - by Dave Burke
    This blog entry is a follow-on to https://blogs.oracle.com/oum/entry/oum_is_business_process_and and addresses a question I sometimes get asked: "When I am gathering requirements on a project, should I use a Process Modeling approach, or should I use a Use Case approach?" Not surprisingly, the short answer is "it depends"! Let's take a scenario where you are working on a Sales Force Automation project. We'll call the process that is being implemented "Lead-to-Order". I would typically think of this type of project as being "Process Centric". In other words, the focus will be on orchestrating a series of human and system related tasks that ultimately deliver value to the business in a cost effective way. Put in even simpler terms: implement an automated pre-sales system. For this type of (Process Centric) project, requirements would typically be gathered through a series of Workshops where the focal point will be on creating, or confirming, the Future-State (To-Be) business process. If pre-defined "best-practice" business process models exist, then of course they could and should be used during the Workshops, but even in their absence, the focus of the Workshops will be to define the optimum series of Tasks, their connections, sequence, and dependencies that will ultimately reflect a business process that meets the needs of the business. Now let's take another scenario. Assume you are working on a Content Management project that involves automating the creation and management of content for User Manuals, Web Sites, Social Media publications, etc. Would you call this type of project "Process Centric"? Well, you could, but it might also fall into the category of complex configuration, plus some custom extensions to a standard software application (COTS). For this type of project it would certainly be worth considering using a Use Case approach in order to 1) understand the requirements, and 2) capture the functional requirements of the custom extensions. At this point you might be asking "why couldn't I use a Process Modeling approach for my Content Management project?" Well, of course you could, but you just need to think about which approach is the most effective. Start by analyzing the types of Tasks that will eventually be automated by the system, for example:
    Task Name                  Best Suited To    Notes
    Manage outbound calls      Process Model     A series of linked human and system tasks for calling and following up with prospects
    Manage content revision    Use Case          Updating the content on a website
    Update User Preferences    Use Case          Updating a user's display preferences
    Assign Lead                Process Model     Reviewing a lead, then assigning it to a sales person
    Convert Lead to Quote      Process Model     Updating the status of a lead, and then converting it to a sales order
    As you can see, it's not an exact science, and either approach is viable for the Tasks listed above. However, where you have a series of interconnected Tasks or Activities that, when combined, deliver value to the business, that is a good indicator to lead with a Process Modeling approach. On the other hand, when the Tasks or Activities in question are more isolated and/or do not cross traditional departmental boundaries, then a Use Case approach might be worth considering. Now let's take one final scenario. As you captured the To-Be Process flows for the Sales Force Automation project, you discover a "Gap" in terms of what the client requires and what the standard COTS application can provide.
Let’s assume that the only way forward is to develop a Custom Extension. This would now be a perfect opportunity to document the functional requirements (behind the Gap) using a Use Case approach. After all, we will be developing some new software, and one of the most effective ways to begin the Software Development Lifecycle is to follow a Use Case approach. As always, your comments are most welcome.

    Read the article

  • How do you go from a so-so programmer to a great one? [closed]

    - by Cervo
    How do you go from being an okay programmer to being able to write maintainable, clean code? For example, David Heinemeier Hansson created Rails in the process of writing Basecamp in a clean/maintainable way. But how do you know when there is value in a side project like that? I have a bachelor's in computer science, and I am about to get a master's, and I will say that colleges teach you to write code to solve problems, not neatly or anything. Basically you think of a problem, come up with a solution, and write it down... not necessarily in the most maintainable way in the world. Also, my first job was in a startup, and now my third is in a small team in a large company where the attitude was/is "get it done yesterday" (also, most of my jobs are mainly database development with SQL, with a few ASP.NET web pages/.NET apps on the side). So of course cut/paste is favored over doing things cleanly. And they would rather have something yesterday, even if you have to rewrite it next month, than have something in a week that lasts for a year. Also, spaghetti code turns up all over the place, and it takes very smart people to write/understand/maintain spaghetti code... however, it would be better to do things so simply/cleanly that even a caveman/woman could do maintenance. Also, I get very bored/unmotivated having to go modify the same things cut/pasted in a few locations. Is this the type of skill that you need to learn by working with a serious software organization that has an emphasis on maintenance, and maybe even an architect who designs a system architecture and reviews code? Could you really learn it by volunteering on an open source project (it seems to me that a full-time programmer job is way more practice than a few hours a week on an open source project)? Is there some course where you can learn this? I can attest that graduate school and undergraduate school do not really emphasize clean software at all. They just teach the structures/algorithms and then send you off into the world to solve problems. Overall, I think the first thing is learning to write clean/maintainable code within the bounds of the project, in order to become a good programmer. Then the next thing is learning when you need to do a side project (like a framework) to make things more maintainable/clean, even while you still deliver things for the deadline, in order to become a great programmer. For example, you are making an SQL report and someone gives you 100 calculations for individual columns. At what point does it make sense to construct a domain-specific language to encode the rules simply and then generate all the SQL, as opposed to cut/pasting the query a bunch of times and then adjusting each copy to do the appropriate calculations? This is the type of thing I would say a great programmer would know. He/she would maybe even know ways to avoid the domain-specific language and still do all the calculations without creating an unmaintainable mess or a ton of repetitive code to cut/paste everywhere.
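    As a concrete version of that report example: once the 100 column calculations live in a data structure instead of 100 pasted queries, the SQL can be generated in a loop. A minimal sketch (the table, columns, and rules here are made up for illustration):

        using System;
        using System.Collections.Generic;

        static class ReportSqlGenerator
        {
            static void Main()
            {
                // Each rule: output column name -> SQL expression that computes it.
                var rules = new Dictionary<string, string>
                {
                    ["total_revenue"] = "SUM(price * quantity)",
                    ["avg_discount"]  = "AVG(discount)",
                    ["order_count"]   = "COUNT(*)",
                    // ...the other calculations go here, as data, not as pasted SQL
                };

                var columns = new List<string>();
                foreach (var rule in rules)
                    columns.Add(rule.Value + " AS " + rule.Key);

                // One generated query instead of N hand-edited copies.
                string sql = "SELECT " + string.Join(", ", columns) + " FROM orders GROUP BY region";
                Console.WriteLine(sql);
            }
        }

    The point is less the specific code than the design choice: when the repeated part is data, generate the repetition instead of maintaining it by hand.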

    Read the article

  • Making user input/math on data fast, unlike Excel-type programs

    - by proGrammar
    I'm creating a research platform solely for myself to do some research on data. Programs like Excel are terribly slow for me, so I'm trying to come up with another solution.
    Originally I used Excel. A1 was the cell that contained the data, and all other cells in use calculated something on A1, or on other cells that could in the end all be traced back to A1. A1 was like an element of an array; I then incremented it to step through all my data. This was way too slow.
    So the only other option I found originally was to hand-code the calculations in C# inside a loop, and then simply recompile each time I changed my math. This was terribly slow to do, and I had to order everything correctly so things would update correctly (dependencies). I could have also used events, but hand-coding events for each cell-like calculation would also be very slow.
    Next I created an application to read Excel and to imitate it perfectly, which is what I now use. Basically, I write formulas onto a fraction of my data to get live results inside Excel. Then my program reads Excel, writes another C# program, compiles it, and runs that program, which runs my Excel-created formulas through a lot more data a whole lot faster. The advantages are that my application dependency-sorts everything (or I could use events) so I don't have to (like Excel does), and of course the speed.
    But now it's not a single application anymore. Instead it's two applications: one which only reads my formulas and writes another program, the other being the result, which only lives for a short while before I do other runs through my data with different formulas/settings. So I can't see multiple results at one time without introducing even more programs, like a database, or at least having the two applications talk to each other.
    My idea was to have a DLL that would be written, compiled, loaded, and unloaded again and again; a self-updating program, sort of. But apparently that's not possible without another AppDomain, which means data has to be marshaled to move between the AppDomains. That would slow things down, not for summaries, but for other stuff I need to do with all my data. I'm also forgetting to mention the huge problem with restarting an application again and again, which is having to reload ALL my data into memory again and again. But it's still a whole lot faster than Excel.
    I'm really puzzled as to what people do when they want to research data fast. I'm completely unable to have a program accept user input and have it be fast. My understanding is that it would have to do things like Excel does, which is to evaluate strings again and again, so my only option is to repeatedly compile applications. Do I have a correct understanding of computer science? I've only just begun programming, and didn't think I would have to learn much to do some simple math on data. My understanding is that it's either compiling my user-defined stuff into a program, or evaluating it from a string, again and again. And my only option is probably to switch operating systems or something, to be able to have a program compile and run itself without stopping (writing/compiling a DLL, loading the DLL into the program, unloading, and repeating). Can someone give me some idea of how computers work? Is anything better possible? Like a running program that can accept user input, compile it, and then unload it later? I mean, heck, operating systems don't need to be RESTARTED with every change to user input. What is this, the caveman days?
    Sorry, it's just so frustrating not knowing what one can and can't do. If only I could understand and learn this stuff fast enough.
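    For what it's worth, .NET can compile user-supplied formulas inside a running process, with no separate executable, no restart, and no extra AppDomain. A minimal sketch using System.Linq.Expressions (the formula is hard-wired here for brevity; a real version would parse the user's formula string into the expression tree):

        using System;
        using System.Linq.Expressions;

        static class FormulaDemo
        {
            static void Main()
            {
                // Build the expression tree for: cell => cell * 2 + 1
                ParameterExpression cell = Expression.Parameter(typeof(double), "cell");
                Expression body = Expression.Add(
                    Expression.Multiply(cell, Expression.Constant(2.0)),
                    Expression.Constant(1.0));

                // Compile() emits IL at runtime, in-process: no new program, no reloading data.
                Func<double, double> formula =
                    Expression.Lambda<Func<double, double>>(body, cell).Compile();

                double[] data = { 1.0, 2.0, 3.0 };
                foreach (double d in data)
                    Console.WriteLine(formula(d));   // 3, 5, 7
            }
        }

    The compiled delegate runs at roughly the speed of hand-written code, so the data can stay loaded in memory while the formulas change.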

    Read the article

  • Encryption is hard: AES encryption to Hex

    - by Rob Cameron
    So, I've got an app at work that encrypts a string using ColdFusion. ColdFusion's built-in encryption helpers make it pretty simple:
    encrypt('string_to_encrypt', 'key', 'AES', 'HEX')
    What I'm trying to do is use Ruby to create the same encrypted string that this ColdFusion script is creating. Unfortunately, encryption is the most confusing computer science subject known to man. I found a couple of helper methods that use the openssl library and give you a really simple encryption/decryption method. Here's the resulting string:
    "\370\354D\020\357A\227\377\261G\333\314\204\361\277\250"
    Which looks unicode-ish to me. I've tried several libraries to convert this to hex, but they all say it contains invalid characters. Trying to unpack it results in this:
    string = "\370\354D\020\357A\227\377\261G\333\314\204\361\277\250"
    string.unpack('U')
    ArgumentError: malformed UTF-8 character
        from (irb):19:in `unpack'
        from (irb):19
    At the end of the day it's supposed to look like this (the output of the ColdFusion encrypt method):
    F8E91A689565ED24541D2A0109F201EF
    Of course, that's assuming that all the padding, initialization vectors, salts, cipher types and a million other possible differences all line up. Here's the simple script I'm using to encrypt/decrypt:
    def aes(m, k, t)
      (aes = OpenSSL::Cipher::Cipher.new('aes-256-cbc').send(m)).key = Digest::SHA256.digest(k)
      aes.update(t) << aes.final
    end

    def encrypt(key, text)
      aes(:encrypt, key, text)
    end

    def decrypt(key, text)
      aes(:decrypt, key, text)
    end
    Any help? Maybe just a simple option I can pass to OpenSSL::Cipher::Cipher that will tell it to hex-encode the final string?
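    One certain piece of this: turning raw cipher bytes into a hex string is a plain encoding step, separate from the encryption itself (in Ruby, cipher_bytes.unpack('H*').first.upcase does it). The harder part is the rest of the question: the hex strings will only match if the key derivation, IV, mode and padding match too. Here is a hedged sketch of the same two-stage recipe in C# terms, to make the stages explicit; it is not a drop-in match for ColdFusion's encrypt():

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class AesHexDemo
        {
            // Encrypt, then hex-encode. Matching another platform's output additionally
            // requires identical key bytes, IV handling, cipher mode, and padding.
            static string EncryptToHex(string plaintext, byte[] key, byte[] iv)
            {
                using (Aes aes = Aes.Create())                      // defaults: CBC, PKCS7 padding
                using (ICryptoTransform enc = aes.CreateEncryptor(key, iv))
                {
                    byte[] input = Encoding.UTF8.GetBytes(plaintext);
                    byte[] cipher = enc.TransformFinalBlock(input, 0, input.Length);
                    // The encoding step: raw bytes -> "F8E9..."-style text.
                    return BitConverter.ToString(cipher).Replace("-", "");
                }
            }
        }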

    Read the article

  • Advice for final-year college students

    - by Tomh
    Hey guys, I know there are many "advice" questions around this site, but I wanted to narrow mine down to final-year college students; in my case, my last year as a Master's student in computer science. Here is a list of things I've done during my time in college (which I can recommend others do as well):
    Code a lot: I've written several hobby projects, had part-time jobs, entered Microsoft's Imagine Cup, took programming-intensive courses and did freelance gigs.
    Read a lot: I've bought most of the top books from the recommended book topics here; to be honest, I have not read them all.
    Learn different languages: I've tried several languages including Haskell, Java, Python, Ruby, Lisp, Prolog, C#, PHP, JS, AS3 and possibly some more I forgot.
    Tried to start a blog: Joel recommends learning how to write. I tried starting a couple of blogs to improve upon this, but gave up on all instances after writing about three posts. It was just not my thing...
    Have a portfolio of launched projects/programs: I'm busy with this; I have a couple of finished, working projects I worked on to show to people.
    So this is my last year. Is there anything else you can recommend a final-year college student do before hitting the job market? Personally, I'm tempted to spend my time on the following:
    Practice algorithm design
    Learn and memorize the usage of the low-level APIs of your favorite language
    Polish your portfolio
    Why? Because the first two will make sure you pass the majority of the interviews, here in Holland at least (I could be wrong). I'd rather not spend my time on those first two points, but I have to be realistic; that's just my experience of the kinds of questions you get when you apply. The third point reflects my hope that I won't have to answer questions about, for example, the number of standard types in C# if they can see I get projects done and launched. But I'm still graduating, so I don't know anything :), and many of you might be hiring recent grads regularly and could tell me and other interested people what you wish the grads you interviewed had done before they applied.

    Read the article

  • Is LaTeX worth learning today?

    - by Ender
    I know that LaTeX is big in the world of academia, and it was probably a big name in desktop publishing before the glory days of WordPerfect and Microsoft Office. But as a Windows user who is interested in the power of LaTeX and the general smoothness of a LaTeX-generated page, is it really worth learning? In a couple of months I'll be starting my final year in Computer Science, and LaTeX has been bounced around the campus by many of the Linux geeks. In reality, is there any need to use it today? What will I actually gain from it, and will I enjoy using it? Finally, how does one use LaTeX on a Windows machine? What software do I really need? I've read a couple of guides, but many of them seem like overkill. Please help break a LaTeX newbie into the world of professional academic publishing!
    EDIT: I've toyed with LaTeX for a while, and have even learned that it's pronounced "lay-tech", not "lay-tecks". I'll agree once again with the accepted answer in saying that MiKTeX is the best solution for Windows users.

    Read the article

  • Pseudocode: a clear definition?

    - by Cian E
    The following code is an example of what I think would qualify as pseudocode, since it does not execute in any language but the logic is correct.
    string checkRubric(gpa, major)
        bool brake = false
        num lastRange
        num rangeCounter
        string assignment = "unassigned"
        array bus['business'] = array('person a' => array(0, 2.9), 'person b' => array(3, 4))
        array cis['computer science'] = array('person c' => array(0, 2.9), 'person d' => array(3, 4))
        array lib['english'] = array('person e' => array(0, 4))
        array rubric = array(bus, cis, lib)
        foreach (rubric as fieldAr)
            foreach (fieldAr as field => advisorAr)
                if (major == field)
                    foreach (advisorAr as advisor => gpaRangeAr)
                        rangeCounter = 0
                        foreach (gpaRangeAr as gpaValue)
                            if (rangeCounter < 1)
                                lastRange = gpaValue
                            else if (gpa >= lastRange && gpa <= gpaValue)
                                assignment = advisor
                                brake = true
                                break
                            endif
                            rangeCounter++
                        endforeach
                        if (brake == true) break endif
                    endforeach
                    if (brake == true) break endif
                endif
            endforeach
            if (brake == true) break endif
        endforeach
        return assignment
    For the past couple of weeks I've been trying to come up with a clear definition of what pseudocode actually is. Is it relative to the programmer, or is there an actual clear-cut syntax? I say pseudocode is any code that does not execute; how about you? Thanks (links on this subject welcome).
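    For contrast, here is a hedged translation of the same rubric logic into code that does execute, in C# (the data and names are carried over from the pseudocode above); it makes the pseudocode-versus-code boundary concrete:

        using System;
        using System.Collections.Generic;

        static class RubricDemo
        {
            // major -> advisor -> inclusive GPA range, copied from the pseudocode's arrays
            static readonly Dictionary<string, Dictionary<string, (double Lo, double Hi)>> Rubric =
                new Dictionary<string, Dictionary<string, (double Lo, double Hi)>>
                {
                    ["business"]         = new Dictionary<string, (double Lo, double Hi)> { ["person a"] = (0, 2.9), ["person b"] = (3, 4) },
                    ["computer science"] = new Dictionary<string, (double Lo, double Hi)> { ["person c"] = (0, 2.9), ["person d"] = (3, 4) },
                    ["english"]          = new Dictionary<string, (double Lo, double Hi)> { ["person e"] = (0, 4) },
                };

            static string CheckRubric(double gpa, string major)
            {
                if (Rubric.TryGetValue(major, out var advisors))
                    foreach (var pair in advisors)
                        if (gpa >= pair.Value.Lo && gpa <= pair.Value.Hi)
                            return pair.Key;     // the advisor whose range contains the GPA
                return "unassigned";
            }

            static void Main()
            {
                Console.WriteLine(CheckRubric(3.2, "computer science")); // person d
            }
        }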

    Read the article

  • [OffTopic] PhD: is it worth it?

    - by Zenzen
    So I'll be graduating next year (hopefully), and lately I've started thinking about getting a PhD in computer science (to be more precise, in something related to "distributed systems"; I don't have any specific topics in mind yet). But is it really worth it? A PhD course here lasts 4 years and is free IF you're doing the "normal" one (which means you have class from Monday to Friday), OR you have to pay if you want to study on the weekends. So I've been thinking about either getting a normal full-time job (as a JEE programmer) and doing my PhD on the weekends, OR doing the normal PhD course (Mon-Fri) and getting a part-time job as a software dev (the main reason I want a job is simple: I need to eat). That would give me practical working experience and theoretical knowledge + a PhD, but it would also mean working 7 days a week from 9 to 5 for 4 years straight (sounds like overkill). Is it, job-wise, really worth getting a PhD? As an employer, do you prefer people with PhDs or MScs? This might be kind of important: I'm not from the US or Western Europe. How are PhDs from other countries seen (I am graduating from one of the best schools in my country, if not the best, but it's still a Central European country...)? Are they useless? I won't hide it: after graduation I am thinking about moving abroad.

    Read the article

  • Ideas for networking project

    - by Chris Thompson
    Hi all, I'm a graduating senior in computer science taking a computer networks class, and I'm trying to figure out my final project. I'm normally not at a loss for ideas, but be it senioritis or straight burnout, I've got nothing. I've done some fun stuff in the past, but I just can't seem to come up with a good idea. Given the mass of brilliance on this site, I figured it would be a good place to request some suggestions. To give you an idea of scope: it's due in about a month, and I would consider myself proficient with mobile architectures like Android (although I have no iPhone experience), along with Java, C++, etc. If you can suggest an idea, I'd be happy to make it work in whatever language I know. Like I said, I'm a senior and will be graduating, so I'd rather not take on something that would kill me... Also, I'd be happy to make it open source if it's an idea you'd always wanted to work on but didn't have the time to start. Thanks in advance for the help! Chris
    Edit 1: Thanks so much for the suggestions, everyone! Unfortunately, I've actually already written a chat client (for a network security class), and I think I'd run into some honor code issues if I did that again, although that's always a great option. I like the game idea, and that's actually something I've never attempted before (in any capacity), although given that, I'm a little scared about time...

    Read the article

  • Language+IDE for teaching high school students?

    - by daveagp
    I'm investigating languages and IDEs for a project involving teaching high-school students (around grade 11). It will teach the basics of programming as an introduction to computer science (e.g., including how numbers/strings/characters are represented, using procedures and arrays, control flow, a little bit of algorithms, and only very basic I/O). The non-negotiable requirements for this project are:
    - a free, up-to-date, cross-platform IDE (Win & Mac, incl. 64-bit) with debugging
    - a compiler where it's easy to learn from your mistakes
    - together with the IDE, a gentle installation + learning curve
    So far, the best options I see are the following. Are there others I should know about? I am giving a short explanation with each one to generally show what I am looking for, in order from most to least promising:
    1. Pascal + FreePascal IDE (it seems a little buggy, but actively developed?)
    2. Python + Eclipse + PyDev (good, but the features are overwhelming/hard to navigate)
    3. Groovy + Eclipse ('')
    4. Python + IDLE (debugging looks unnatural to do, to me)
    5. Pascal + Lazarus (IDE overwhelming, e.g. not obvious how to "start from scratch")
    Preferably, as a rule of thumb, the language should be direct enough that you don't need to wrap every program in a class, don't need to reference a System object to println, etc. I tried a little to see if there is something in JavaScript or (non-Visual) Basic along the lines of what I want, but found nothing so far. I would say that C/C++/C#, Java, Ruby, Lisp, and VB do not fit my criteria for this project. To reiterate my questions: are any of those 5 options really awesome or un-awesome? Are there other options which are even MORE awesome? Anything for Basic or JavaScript which meets all of the criteria? Thanks!

    Read the article

  • Java parsing XML document gives "Content not allowed in prolog." error

    - by thechiman
    I am writing a program in Java that takes a custom XML file and parses it. I'm using the XML file for storage. I am getting the following error in Eclipse:
    [Fatal Error] :1:1: Content is not allowed in prolog.
    org.xml.sax.SAXParseException: Content is not allowed in prolog.
        at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:239)
        at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:283)
        at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:208)
        at me.ericso.psusoc.RequirementSatisfier.parseXML(RequirementSatisfier.java:61)
        at me.ericso.psusoc.RequirementSatisfier.getCourses(RequirementSatisfier.java:35)
        at me.ericso.psusoc.programs.RequirementSatisfierProgram.main(RequirementSatisfierProgram.java:23)
    The beginning of the XML file is included:
    <?xml version="1.0" ?>
    <PSU>
      <Major id="IST">
        <name>Information Science and Technology</name>
        <degree>B.S.</degree>
        <option>Information Systems: Design and Development Option</option>
        <requirements>
          <firstlevel type="General_Education" credits="45">
            <component type="Writing_Speaking">GWS</component>
            <component type="Quantification">GQ</component>
    The program is able to read in the XML file, but when I call DocumentBuilder.parse(XMLFile) to get a parsed org.w3c.dom.Document, I get the error above. It doesn't seem to me that I have invalid content in the prolog of my XML file. I can't figure out what is wrong. Please help. Thanks.

    Read the article

  • How to inject a Session Bean into a Message Driven Bean?

    - by Hank
    Hi guys, I'm reasonably new to JEE, so this might be stupid... bear with me pls :D
    I would like to inject a stateless session bean into a message-driven bean. Basically, the MDB gets a JMS message, then uses a session bean to perform the work. The session bean holds the business logic. Here's my session bean:
    @Stateless
    public class TestBean implements TestBeanRemote {
        public void doSomething() {
            // business logic goes here
        }
    }
    The matching interface:
    @Remote
    public interface TestBeanRemote {
        public void doSomething();
    }
    Here's my MDB:
    @MessageDriven(mappedName = "jms/mvs.TestController", activationConfig = {
        @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
    })
    public class TestController implements MessageListener {
        @EJB
        private TestBean testBean;

        public TestController() {
        }

        public void onMessage(Message message) {
            testBean.doSomething();
        }
    }
    So far, not rocket science, right? Unfortunately, when deploying this to GlassFish v3 and sending a message to the appropriate JMS queue, I get errors saying that GlassFish is unable to locate the TestBean EJB:
    java.lang.IllegalStateException: Exception attempting to inject Remote ejb-ref name=mvs.test.TestController/testBean,Remote 3.x interface =mvs.test.TestBean,ejb-link=null,lookup=null,mappedName=,jndi-name=mvs.test.TestBean,refType=Session into class mvs.test.TestController
    Caused by: com.sun.enterprise.container.common.spi.util.InjectionException: Exception attempting to inject Remote ejb-ref name=mvs.test.TestController/testBean,Remote 3.x interface =mvs.test.TestBean,ejb-link=null,lookup=null,mappedName=,jndi-name=mvs.test.TestBean,refType=Session into class mvs.test.TestController
    Caused by: javax.naming.NamingException: Lookup failed for 'java:comp/env/mvs.test.TestController/testBean' in SerialContext [Root exception is javax.naming.NamingException: Exception resolving Ejb for 'Remote ejb-ref name=mvs.test.TestController/testBean,Remote 3.x interface =mvs.test.TestBean,ejb-link=null,lookup=null,mappedName=,jndi-name=mvs.test.TestBean,refType=Session' . Actual (possibly internal) Remote JNDI name used for lookup is 'mvs.test.TestBean#mvs.test.TestBean' [Root exception is javax.naming.NamingException: Lookup failed for 'mvs.test.TestBean#mvs.test.TestBean' in SerialContext [Root exception is javax.naming.NameNotFoundException: mvs.test.TestBean#mvs.test.TestBean not found]]]
    So my questions are:
    - is this the correct way of injecting a session bean into another bean (particularly a message-driven bean)?
    - why is the naming lookup failing?
    Thanks for all your help! Cheers, Hank

    Read the article

  • Front End Developer vs. PHP-MySQL Engineer

    - by user301943
    Hello, I want to decide which of these would be the more viable career option. I am ready to quit my current job, and hence I am looking for a new opportunity. My current job is maintenance, with no more active development. My current role is that of a PHP/MySQL developer. I understand web programming very well and am comfortable with RoR/Sinatra/Zend MVC/jQuery/JSON manipulation, etc. I understand MySQL's InnoDB/MyISAM engines and how one differs from the other. Basically, I could very well manage the deployment of a web application end-to-end, including configuration of Apache/Nginx servers, memcache, etc.
    On the other hand, I am being offered a Sr. Front End Web Developer role that would require me to extensively write cross-browser/cross-platform-compliant HTML/CSS. I understand XHTML/CSS/the box model etc. very well. I would be working on Drupal for the management of websites. While I understand that continuing to work on server-side technologies would always be a good career path, how would the role of a core front-end developer turn out? If I take this opportunity, will I eventually get a chance to focus on UCD, HCI, Information Architecture, etc.? Are those kinds of roles possible if I focus on front-end development? No offense to front-end developers; I just want to understand if this is something I want to gain mastery over. I have 2 years of industry experience after graduating with an MS in Computer Science. Although I have a CS degree, if I were to take up a serious front-end role, I could probably go back and take some design/HCI/UI courses. Please advise.

    Read the article

  • CS Master's Degree Project vs. Thesis options

    - by Nwosh
    I'm doing a master's degree in computer science, and I'm currently at the point where I have to decide between the thesis and non-thesis options offered by my university. The thesis option was my first choice; it entails taking fewer courses but tends to take more time overall. The non-thesis option involves taking more coursework, taking a comprehensive exam, and doing a one-semester project with a faculty member. I'd like to pursue a PhD eventually (although not right away; I want to get some years of professional experience first), and I've heard that having demonstrated the ability to work on a thesis helps a lot with admission (not doing a thesis raises questions and suggests a lack of interest in research), and that the experience itself is very good. At the same time, almost everyone I know who did a thesis at my university took a long time (2-3 years), when in theory it could be done in 1.5 years. I'm a part-time student, and I don't really want to spend so much time just getting a master's degree; I could still publish a few papers while working on the project option, and I'd be done in a year or so. Additionally, I've heard a master's degree with a project and more coursework is more desirable to industry. So, when applying for a PhD in CS at some of the better universities, would the time spent working on the master's thesis help get me accepted? Or should I opt for the non-thesis option and hope that the extra coursework and publishing some papers make up for not working on a thesis?

    Read the article

  • Lucene HTMLFormatter skipping last character

    - by Midhat
    I have this simple Lucene search code (modified from http://www.lucenetutorial.com/lucene-in-5-minutes.html):
    class Program {
        static void Main(string[] args) {
            StandardAnalyzer analyzer = new StandardAnalyzer();
            Directory index = new RAMDirectory();
            IndexWriter w = new IndexWriter(index, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED);
            addDoc(w, "Table 1 <table> content </table>");
            addDoc(w, "Table 2");
            addDoc(w, "<table> content </table>");
            addDoc(w, "The Art of Computer Science");
            w.Close();

            String querystr = "table";
            Query q = new QueryParser("title", analyzer).Parse(querystr);
            Lucene.Net.Search.IndexSearcher searcher = new Lucene.Net.Search.IndexSearcher(index);
            Hits hitsFound = searcher.Search(q);

            SimpleHTMLFormatter formatter = new SimpleHTMLFormatter("*", "*");
            Highlighter highlighter = new Highlighter(formatter, new QueryScorer(searcher.Rewrite(q)));

            for (int i = 0; i < hitsFound.Length(); i++) {
                Console.WriteLine(highlighter.GetBestFragment(analyzer, "title", hitsFound.Doc(i).Get("title")));
                // Console.WriteLine(hitsFound.Doc(i).Get("title"));
            }
            Console.ReadKey();
        }

        private static void addDoc(IndexWriter w, String value) {
            Document doc = new Document();
            doc.Add(new Field("title", value, Field.Store.YES, Field.Index.ANALYZED));
            w.AddDocument(doc);
        }
    }
    The highlighted results always seem to skip the closing '</table>' of my last table tag. Any suggestions?

    Read the article

  • Project Euler: Programmatic Optimization for Problem 7?

    - by bmucklow
    So I would call myself a fairly novice programmer, as I focused mostly on hardware in my schooling and not a lot of computer science courses. So I solved Problem 7 of Project Euler:
    By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we can see that the 6th prime is 13. What is the 10001st prime number?
    I managed to solve this without problem in Java, but when I ran my solution it took 8-and-change seconds to run. I was wondering how this could be optimized from a programming standpoint, not a mathematical standpoint. Are the array looping and while statements the main things eating up processing time? And how could this be optimized? Again, I'm not looking for a fancy mathematical equation... there are plenty of those in the solution thread.
    SPOILER My solution is listed below.
    public class PrimeNumberList {

        private ArrayList<BigInteger> primesList = new ArrayList<BigInteger>();

        public void fillList(int numberOfPrimes) {
            primesList.add(new BigInteger("2"));
            primesList.add(new BigInteger("3"));
            while (primesList.size() < numberOfPrimes) {
                getNextPrime();
            }
        }

        private void getNextPrime() {
            BigInteger lastPrime = primesList.get(primesList.size() - 1);
            BigInteger currentTestNumber = lastPrime;
            BigInteger modulusResult;
            boolean prime = false;
            while (!prime) {
                prime = true;
                currentTestNumber = currentTestNumber.add(new BigInteger("2"));
                for (BigInteger bi : primesList) {
                    modulusResult = currentTestNumber.mod(bi);
                    if (modulusResult.equals(BigInteger.ZERO)) {
                        prime = false;
                        break;
                    }
                }
                if (prime) {
                    primesList.add(currentTestNumber);
                }
            }
        }

        public BigInteger get(int primeTerm) {
            return primesList.get(primeTerm - 1);
        }
    }
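    Two language-agnostic observations about the code above: BigInteger arithmetic is far slower than native integers for numbers this small, and the divisor loop tests every stored prime when it could stop at the square root of the candidate (if n = a * b with a <= b, then a <= sqrt(n), so any composite has a prime factor no larger than its square root). A sketch of both fixes, written in C# here rather than Java, purely for illustration:

        using System;
        using System.Collections.Generic;

        static class Primes
        {
            static long NthPrime(int n)
            {
                var primes = new List<long> { 2, 3 };
                long candidate = 3;
                while (primes.Count < n)
                {
                    candidate += 2;                      // even numbers > 2 are never prime
                    bool isPrime = true;
                    foreach (long p in primes)
                    {
                        if (p * p > candidate) break;    // no need to test past sqrt(candidate)
                        if (candidate % p == 0) { isPrime = false; break; }
                    }
                    if (isPrime) primes.Add(candidate);
                }
                return primes[n - 1];
            }

            static void Main() => Console.WriteLine(NthPrime(10001)); // 104743
        }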

    Read the article

  • How to force C# binary int division to return a double?

    - by Wayne
    How do I force double x = 3 / 2; to return 1.5 in x, without the D suffix or casting? Is there any kind of operator overload that can be done? Or some compiler option? Amazingly, it's not so simple to add the casting or suffix, for the following reason: business users need to write and debug their own formulas. Presently, C# is being used like a DSL (domain-specific language), in that these users aren't computer science engineers. So all they know is how to edit and create a few types of classes to hold their "business rules", which are generally just math formulas. But they always assume that double x = 3 / 2; will give x = 1.5; however, in C# that returns 1. A. They always forget this, waste time debugging, call me for support and we fix it. B. They think it's very ugly and that it hurts the readability of their business rules. As you know, DSLs need to be more like natural language. Yes, we are planning to move to Boo and build a DSL based on it, but that's down the road. Is there a simple solution to make double x = 3 / 2; return 1.5 by something external to the class, so it's invisible to the users? Thanks! Wayne
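    For reference, the behavior itself in one runnable snippet (a sketch). It also shows why no external overload can intercept 3 / 2: both operands are ints, C# does not allow overloading operators for built-in types, and the integer division happens before the assignment to the double ever sees the value. Practical workarounds tend to involve rewriting the formula text before compilation (e.g. turning integer literals into doubles), which keeps the fix invisible to the formula authors.

        using System;

        static class DivisionDemo
        {
            static void Main()
            {
                double a = 3 / 2;          // int / int -> 1, then widened to 1.0
                double b = 3 / 2.0;        // one double operand -> double division -> 1.5
                double c = (double)3 / 2;  // a cast also forces double division -> 1.5
                Console.WriteLine($"{a} {b} {c}");   // 1 1.5 1.5
            }
        }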

    Read the article

  • Whatever happened to APL?

    - by lkessler
    When I was at university 30 years ago, I used a programming language called APL. I believe the acronym stood for "A Programming Language". The language was interpreted, and was especially useful for array and matrix operations, with powerful operators and library functions to help with that. Did you use APL? Is this language still in use anywhere? Is it still available, either commercially or open source? I remember the combinatorics assignment we had. It was complex. It took a week of work for people to program it in PL/1, and those programs ranged from 500 to 1000 lines long. I wrote it in APL in under an hour. I left it at 10 lines for readability, although I should have been a purist and worked another hour to get it into 1 line. The PL/1 programs took 1 or 2 minutes to run on the IBM mainframe and solve the problem. The computer charge was $20. My APL program took 2 hours to run, and the charge was $1,500, which was paid for by our Computer Science Department's budget. That's when I realized that a week of my time is worth way more than saving some dollars in someone else's budget. I got an A+ in the course.
    P.S. Don't miss this presentation, entitled "APL one of the greatest programming languages ever".

    Read the article

  • LaTeX: why are references only partially showing up?

    - by HH
    The bibliography style part may be the problem. If I do not cite a reference, does it still show up? I have listed all the errors below; the file compiles, so I don't know whether they are related to the partially-showing-up references. For example, a work with many authors gets only one author listed. I want to see references fully, not partially.
    Headers
    $ grep bib header.tex
    \usepackage{natbib}
    \bibliographystyle{abbrvnat}
    Errors
    $ grep -n -A 7 -B 7 Error *.log
    combined.log-505-! Illegal unit of measure (pt inserted).
    combined.log-506-<to be read again>
    combined.log-507- \futurelet
    combined.log-508-l.353 \hline
    combined.log-509-
    combined.log-510-?
    combined.log-511-
    combined.log:512:! Package caption Error: cite undefined.
    combined.log-513-
    combined.log-514-See the caption package documentation for explanation.
    combined.log-515-Type H <return> for immediate help.
    combined.log-516- ...
    combined.log-517-
    combined.log-518-l.374 ...n={CPU O(mlog(n))}, cite={topcoder:node}]
    combined.log-519-
    --
    combined.log-559- []
    combined.log-560-
    combined.log-561-) [10]
    combined.log-562-\openout2 = `references.aux'.
    combined.log-563-
    combined.log-564- (./references.tex
    combined.log-565-
    combined.log:566:! LaTeX Error: \include cannot be nested.
    combined.log-567-
    combined.log-568-See the LaTeX manual or LaTeX Companion for explanation.
    combined.log-569-Type H <return> for immediate help.
    combined.log-570- ...
    combined.log-571-
    combined.log-572-l.1 \include{timeUse.tex}
    Bibs.bib
    @misc{ Gundersen,
      author = "G. Gundersen",
      title  = "Data Structures in Java for Matrix Computations",
      year   = "2002"
    }
    @book{ Lennart,
      author = "R. Lennart",
      title  = "Mathematics Handbook for Science and Engineering BETA",
      year   = "2004"
    }

    Read the article

  • Non-Mainstream Languages: Bad for your resume?

    - by Joe
    Hi folks, I got my BS in Computer Science about seven years ago. I spent two years in neuroscience research and the next three providing what amounts to tech support. But I love computer programming, and I have since written, as a freelancer, non-trivial commercial code in Haskell, Smalltalk, and Objective-C. I used these languages because I find them rewarding and they make me a better programmer, and thus, I thought, more attractive to companies. However, the polar opposite has occurred, and I am now unhireable. The freelance market has bottomed out, and I am looking for regular employment. But I am being repeatedly turned down, even for entry-level positions, because I don't specifically fit the requirements, e.g. a Java programmer with 2+ years with JUnit, JavaMail, Servlets, etc. And none of the hiring managers, let alone the recruiters, have heard of either Haskell or Smalltalk; more disturbing is their thinly veiled contempt for my background. My question is: how should I market myself for these positions? Is anyone here in a similar position? What should I be doing differently, professionally? More broadly, is this contempt for non-mainstream experience occurring everywhere, or just in my town? And if there are any hiring managers reading this, I'd love to hear your side. Please be brutally honest. Thanks, Joe

    Read the article

  • Bringing ideas to life

    - by danixd
    I understand that this forum is filled with computer science geniuses, but there must come a point where your expertise in coding cannot keep up with your creativity. I am a front-end programmer and designer of sorts. I have so many ideas, with no ways of implementing them. I know of platforms that can help; I am really into the Arduino project for physical ideas, and I know Flash is an amazing platform for software, but when it comes to complex ideas, such as a 3D game, things can be too much to handle individually. When you have an idea, what is your typical methodology for bringing it to life?
    - Write down the idea and save it for later (never gets made right...)
    - Try building it yourself, on the platforms you work with and are good at
    - Consult people on what platform it is best suited to, and collaborate with an expert in that field to do the dirty work
    - Consult people on what platform it is best suited to, and try to learn it and make it yourself (at least the alpha stages)
    Are there any communities that support this idea of bringing things to life? Can you pitch ideas to company X as a business model and hope they take you on / sponsor you? Do you have to spend lots of $$$ to get things made?

    Read the article
