Search Results

Search found 2082 results on 84 pages for 'lessons learned'.

Page 45 of 84

  • Ops Center 12c - Update - Provisioning Solaris on x86 Using a Card-Based NIC

    - by scottdickson
    Last week, I posted a blog describing how to use Ops Center to provision Solaris over the network via a NIC on a card rather than the built-in NIC. Really, that was all about how to install Solaris on a SPARC system. This week, we'll look at how to do the same thing for an x86-based server. The overall process is exactly the same, at least for Solaris 11, with only minor updates. We will focus on Solaris 11 for this blog; once I verify that the same approach works for Solaris 10, I will provide another update.

    Booting Solaris 11 on x86
    Just as before, in order to configure the server for network boot across a card-based NIC, it is necessary to declare the asset to associate the additional MACs with the server. You will likely need to access the server console via the ILOM to figure out the MAC and to get a good idea of the network instance number. The simplest way to find both is to start a network boot using the desired NIC and see where it appears in the list of network interfaces and what MAC is used when it tries to boot. Go to the ILOM for the server, reset the server, and start the console. When the BIOS loads, select the boot menu, usually with Ctrl-P. This will give you a menu of devices to boot from, including all of the NICs. Select the NIC you want to boot from; its position in the list is a good indication of what network number Solaris will give the device. In this case, we want to boot from the 5th interface (GB_4, net4). Pick it and start the boot process. When it starts to boot, you will see the MAC address for the interface. Once you have the network instance and the MAC, go through the same process of declaring the asset as in the SPARC case. This associates the additional network interface with the server.

    Creating an OS Provisioning Plan
    The simplest way to boot via an alternate interface on an x86 system is to do a manual boot. Update the OS provisioning profile as in the SPARC case to reflect the fact that we are booting from a different interface; in this case, set the network boot device to GB_4/net4, or the device corresponding to your network instance number. Configure the profile to support manual network boot by checking the box for manual boot in the OS Provisioning profile.

    Booting the System
    Once you have created a profile and plan to support booting from the additional NIC, we are ready to install the server. Again, from the ILOM, reset the system and start the console. When the BIOS loads, select boot from the Boot Menu as above. Select the network interface from the list as before and start the boot process. When the GRUB bootloader loads, the default boot image is the Solaris Text Installer. On the GRUB menu, select Automated Installer and Ops Center takes over from there.

    Lessons
    The key lesson from all of this is that Ops Center is a valuable tool for provisioning servers whether they are connected via built-in network interfaces or via high-speed NICs on cards. This is great news for modern datacenters using converged network infrastructures. The process works for both SPARC and x86 Solaris installations. And it's easy and repeatable.

  • Most suited technology for browser games?

    - by Tingle
    I was thinking about making a 2D MMO which I would, in the long run, support on various platforms: desktop, Mac, browser, Android, and iOS. The server will be C++/Linux based, and the first client would run in the browser. I have done some research and found that WebGL and Flash 11 support hardware-accelerated rendering, and I saw some other options like plain HTML5 canvas drawing. So my question is: which technology should I use for such a project? My main goal is that users have a hassle-free experience, getting whatever their hardware can give them with hardware acceleration, and that the client works on the most basic out-of-the-box PCs that any casual PC or Mac user has. Another criterion is that it should be developer friendly. I've messed with WebGL a bit, for example, and that would require writing an engine from scratch - which is acceptable but not preferred. Also, in the case of the non-ActionScript options, which language is preferred in terms of speed and flexibility? I'm not too fond of JavaScript due to the garbage collector, but I have learned to work around it. Thank you for your time.

  • Storing data offline with javascript

    - by Walker
    My question is about storing data offline - and, potentially, whether I will need to bring in an outside programmer or whether this could be learned within a few weeks. The website I am working on will have an interface where users log in and go through a series of quizzes in the form of checkboxes, drop-down menus, and others. Each page/quiz area could have 20-100 total checkboxes in a series of 3-5 rows because of the comprehensive nature of the course. This I can do - I know how to code the quiz and return a correct or incorrect answer based on each individual checkbox and present a cumulative score (i.e., you got 57% correct). The issue lies in the fact that I would like to save the users' results and keep them informed of their progress. When they complete all of the quizzes, I would like to have a visual output of their performance in each area. Storing the output from their results offline is where I think I may run into a problem with my lack of coding experience. I would also like to have a sidebar with their progress in each section (10-15), with a green percentage-completion bar or a % correct which would draw from this. I have never had to code something that stores information like this offline - so back to my question: would it be better to learn the language needed or to bring in a coder/developer for the back-end stuff?

  • Layout Columns - Equal Height

    - by Kyle
    I remember first starting out using tables for layouts and learning that I should not be doing that. I am working on a new site and cannot seem to do equal-height columns without using tables. Here is an example of the attempt with div tags:

        <div class="row">
          <div class="column">column1</div>
          <div class="column">column2</div>
          <div class="column">column3</div>
          <div style="clear:both"></div>
        </div>

    What I tried with that was making the columns float left and setting their widths to 33%, which works fine. I use the clear:both div so that the row would be the size of the biggest column, but the columns end up different heights based on how much content they have. I have found many fixes, which mostly involve CSS hacks and just making it look like it's right, but that's not what I want. I thought of just doing it in JavaScript, but then it would look different for those who choose to disable their JavaScript. The only true way of doing it that I can think of is using tables, since the cells all have equal heights in the same row. But I know it's bad to use tables. After searching forever, I then came across this: http://intangiblestyle.com/lab/equal-height-columns-with-css/ What it seems to do is exactly the same as tables, since it's just setting its display exactly like tables. Would using that be just as bad as using tables? I honestly can't find anything else that I could do.

    Edit @Su': I have looked into "faux columns" and do not think that is what I want. I think I would be able to implement better designs for my site using the display:table method. I posted this question because I just wasn't sure if I should, since I have always heard it's bad to use tables in website layouts.

  • Rules Manager and Expression Filter getting removed

    - by Mike Dietrich
    I doubt that many people are using the Oracle features "Rules Manager" and "Expression Filter", as usually people handle these things (such as ensuring that a zip code or a car number plate has a certain format) within the application code and not inside the database. Oracle Beehive, for instance, uses them just on the side. Anyway, I just learned today that the Rules Manager and Expression Filter components will be removed once our next database release, most likely called Oracle Database 12c, gets released. So before upgrading to Oracle Database 12c you can remove the EXF and RUL components. To check whether they are installed:

        SELECT COMP_ID FROM DBA_REGISTRY WHERE COMP_ID IN ('EXF','RUL');

    You'd simply remove them by executing the following script before the upgrade:

        SQL> @?/rdbms/admin/catnoexf.sql

    This will clean up the Rules Manager and Expression Filter components inside the database. You could run ?/rdbms/admin/catnorul.sql first, but I believe catnoexf.sql will clean up everything already. And you'll find all this information, plus guidelines for migrating existing content, in MOS Note 1233535.1 - Obsolescence Notice: Rules Manager and Expression Filter Features of Oracle Database. -M.

  • TechEd 2012: MVVM In XAML

    - by Tim Murphy
    Paul Sheriff was a real character at the start of his MVVM in XAML session. There was a lot of sarcasm and self-deprecation going on prior to the start. That is never a bad way to get things rolling right after lunch. Then things got semi-serious. The presentation itself had a number of surprises, but not all of them had to do with XAML. When he flipped over to his company's code generation tool, it took me off guard. I am used to generators that create code for a whole project, but his tools were able to create different types of constructs on demand. It also made it easier to follow what he was doing than some of the other demos I have seen this week where people were using code snippets.

    Getting to the heart of the topic, I found myself thinking that I may have found my utopia for application development in MVVM. Yes, I know there is no such thing, but this comes closer than any other pattern I have learned about. This pattern allows the application to have better separation of concerns than I have seen before. This is especially true since you can leverage data binding. I'm not sure why it has taken me so long to find time for this subject. As Paul demonstrated, using this pattern with XAML gives you multi-platform reusable code when you leverage common utility classes and ViewModel classes. The one drawback I see is that you have to go to the lowest common denominator between the platforms you want to support, but you always have to weigh the trade-offs.

    And finally, the Visual Studio nuggets just keep coming. Even though it has been available for several generations of Visual Studio, I had never seen someone use linked files within a solution. It just goes to show that I should spend more time exploring the deeper features of each dialog.

    del.icio.us Tags: TechEd, TechEd 2012, MVVM, Paul Sheriff, Patterns, Visual Studio 2012

  • How should I start refactoring my mostly-procedural C++ application?

    - by oob
    We have a program written in C++ that is mostly procedural, but we do use some C++ containers from the standard library (vector, map, list, etc.). We are constantly making changes to this code, so I wouldn't call it a stagnant piece of legacy code that we can just wrap up. There are a lot of issues with this code making it harder and harder for us to make changes, and I see the three biggest being:

      - Many of the functions do more (way more) than one thing
      - We violate the DRY principle left and right
      - We have global variables and global state up the wazoo

    I was thinking we should attack areas 1 and 2 first. Along the way, we can "de-globalize" our smaller functions from the bottom up by passing information that is currently global into the lower-level functions as parameters from the higher-level functions, and then concentrate on figuring out how to remove the need for global variables as much as possible. I just finished reading Code Complete 2 and The Pragmatic Programmer, and I learned a lot, but I am feeling overwhelmed. I would like to implement unit testing, change from a procedural to an OO approach, automate testing, use a better logging system, fully validate all input, implement better error handling, and many other things, but I know that if we started all this at once, we would screw ourselves. I am thinking the three I listed are the most important to start with. Any suggestions are welcome. We are a team of two programmers, mostly with experience in in-house scripting. It is going to be hard to justify taking the time to refactor, especially if we can't bill the time to a client. Believe it or not, this project has been successful enough to keep us busy full time and also keep several consultants busy using it for client work.
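
    As a concrete illustration of that "de-globalize" step - a minimal sketch with invented names, not the poster's code - the information a low-level function used to read from a global becomes an explicit parameter supplied by its caller:

        #include <iostream>
        #include <string>

        struct Settings {            // was: scattered globals g_host, g_port
            std::string host;
            int port;
        };

        // Before: void connectToServer() { /* reads g_host and g_port */ }
        // After: the dependency is explicit, so the function can be tested
        // with any Settings value and carries no hidden state.
        void connectToServer(const Settings& s) {
            std::cout << "connecting to " << s.host << ":" << s.port << "\n";
        }

        int main() {
            Settings s{"db.example.com", 5432};  // built once at the top level
            connectToServer(s);                  // passed down the call chain
        }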

  • What's the canonical process for backing up a website?

    - by Walkerneo
    This is going to sound terrible, but bear with me. I currently have a cron job that does a mysqldump, a git add-all and commit, and a git push to Bitbucket. I set this up almost a year ago, when I didn't know much about git, backups, and general web development and administration. I haven't had the time to fix this and do it properly, but the repo has now grown quite big from accumulating large temporary files from my forum, so now I have to do something, and I want to do it properly this time around. What processes do semi-large websites and personal site admins use for backing up server content? Based on what I've learned since I set this up, what I'm currently thinking of doing is:

      - Making changes on a development domain and committing the code frequently
      - Archiving the entire site after a successful deployment from the development domain
      - Having automatic daily database and user-content backups

    I still like the idea of backing up sqldumps with git, though. I know git isn't a backup tool and that this is beyond its purpose, but the textual queries that are exported would be easily managed by git and would save a lot of space in archives.

  • Is there a massive other side to software development which I've somehow missed, revolving entirely around Microsoft?

    - by Aerovistae
    I'm still a beginning programmer; I've been at it for 2 years. I've learned to work with a few languages, a few web development technologies, and a handful of libraries, frameworks, and IDEs. But over the past two years (and long before I even started, really), I keep hearing references to these... things. A million of them. Things such as C#, ADO, SOAP, ASP, ASP.NET, the .NET Framework, the CLR, F#, etc. And I've read their Wikipedia articles, in depth, multiple times, and they all mention a million other things on that list, but I just can't seem to grasp what it all is. The only thing I've taken away with any certainty is that Microsoft is behind all of it. It sounds almost like a conspiracy. Are all these technologies just for developing on the Windows platform? What is .NET? Do some software developers dedicate their entire careers just to that side of things? Why would I want to get into it, and what advantage does... whatever it is... have over all the other technologies there are? I hope this makes sense. It's a broad question, but inside it there's a very specific question asking about something I don't know the name of. Hopefully you can grasp my confusion.

  • How can state changes be batched while adhering to opaque-front-to-back/alpha-blended-back-to-front?

    - by Sion Sheevok
    This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world:

      - Draw all opaque objects, front-to-back.
      - Draw all alpha-blended objects, back-to-front.

    Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you are adhering to the above two rules, there's little to be done in regard to batching. I see one possibility to batch while still adhering to the above two rules: opaque objects can still be drawn out of depth order, because drawing them front-to-back is merely a fill-rate optimization, and state changes may very well be far more expensive than the overdraw from drawing out of depth order. However, non-opaque objects - those that require alpha blending, at least - must be drawn back-to-front in order to avoid rendering artifacts. Is the loss of the fill-rate optimization for opaques worth the state-batching optimization?
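
    One common way to encode that compromise is a per-draw-call sort key: opaque calls group by state first and sort by depth only within a batch, while blended calls sort strictly back-to-front. A sketch with invented types, not from the post:

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct DrawCall {
            std::uint32_t stateKey;  // hash of texture/shader/buffer bindings
            float depth;             // distance from the camera
            bool blended;            // alpha-blended calls need depth ordering
        };

        // Opaques: batch by state, then front-to-back inside each batch
        // (trading some fill-rate optimization for fewer state changes).
        // Blended: strictly back-to-front, regardless of state.
        void sortForSubmission(std::vector<DrawCall>& calls) {
            std::stable_sort(calls.begin(), calls.end(),
                [](const DrawCall& a, const DrawCall& b) {
                    if (a.blended != b.blended) return !a.blended;  // opaques first
                    if (!a.blended) {
                        if (a.stateKey != b.stateKey) return a.stateKey < b.stateKey;
                        return a.depth < b.depth;   // front-to-back
                    }
                    return a.depth > b.depth;       // back-to-front
                });
        }

        int main() {
            std::vector<DrawCall> calls{
                {7, 3.0f, false}, {7, 1.0f, false}, {2, 9.0f, true}, {2, 4.0f, true}};
            sortForSubmission(calls);  // batched opaques, then ordered blended calls
        }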

  • One good reason for a rewrite

    - by Supermighty
    I have a personal web project that I cut my teeth on while learning how to program. I wrote it in PHP and learned as I went. Eventually I refactored it to use MVC and removed all mixing of PHP and HTML. Right now it has no users save myself, and it makes no money. I have a strong desire to rewrite the entire app - which really isn't that large of an app. I have a lot of reasons why I should not rewrite it. I know that I should move forward; it's a working app now, and a rewrite will only set me back. But I can't shake this feeling that I would be better off using a different programming language in the long run. That I'd enjoy it more. That I'd feel more comfortable with it. I feel like my one good reason to rewrite my app is that I have a gut feeling that I should. PHP seems like a hack thrown together. I want to use a language that feels more elegant to me. Any feedback you have would be welcome.

  • Using Content Analytics for More Effective Engagement

    - by Kellsey Ruppel
    Using Content Analytics for More Effective Engagement: Turning High-Volume Content into Templates for Success
    By Mitchell Palski, Oracle WebCenter Sales Consultant

    Many organizations use Oracle WebCenter Portal to develop these basic types of portals:

      - Intranet portals used for collaboration, employee self-service, and company communication
      - Extranet portals used by customers and partners for self-service and support
      - Team collaboration portals that allow users to share documents and content, track activity, and engage in discussions

    Portals are intended to provide a personalized, single point of interaction with web-based applications and information. The user experiences that a portal is capable of displaying should be relevant to an individual user or class of users (a group or role). The components of a portal that would vary based on a user's identity include:

      - Web content such as images, news articles, and on-screen instruction
      - Social tools such as threaded discussions, polls/surveys, and blogs
      - Document management tools to upload, download, and edit files
      - Web applications that present data visualizations and data entry modules

    These collections of content, tools, and applications make up valuable workspaces. The challenge that a development team may have is defining which combinations are the most effective for its users. No one wants to create and manage a workspace that goes unused or (even worse) that is used but is ineffective. Oracle WebCenter Portal provides you with the capabilities to not only rapidly develop variations of portals, but also identify which portals are the most effective and should be reused throughout an enterprise.

    Capturing Portal Analytics
    Oracle WebCenter Portal provides an analytics service that allows administrators and business users to track and analyze portal usage. These analytics are captured in the form of:

      - Usage tracking metrics
      - Behavior tracking
      - User profile correlation

    The out-of-the-box task reports that come with Oracle WebCenter Portal include: WebCenter Portal Traffic, Page Traffic, Login Metrics, Portlet Traffic, Portlet Response Time, Portlet Instance Traffic, Portlet Instance Response Time, Search Metrics, Document Metrics, Wiki Metrics, Blog Metrics, Discussion Metrics, Portal Traffic, and Portal Response Time.

    By determining the usage and behavior tracking metrics that are associated with specific user profiles (including groups and roles), your administrators will be able to identify the components of your solution that are the most valuable. Your first step as an administrator should be to identify the specific pages and/or components that are used most frequently. Next, determine the user(s) or user group(s) that are accessing those high-use elements of a portal. It is also important to determine patterns in high usage and see if they correlate to a specific schedule. One of the goals of any development team (especially those following Agile methodologies) should be to develop reusable web components to minimize redundant development. Oracle WebCenter Portal provides you the tools to capture the successful workspaces that have already been developed and identified so that they can be reused for similar user demographics.

    Re-using Successful Portals
    When creating a new portal in Oracle WebCenter, developers have the option to base that portal on a template that includes:

      - Pre-seeded data such as pages, tools, user roles, and look-and-feel assets
      - Specific sub-sets of page layouts, tools, and other resources to standardize what is added to a portal's pages
      - Any custom components that your team creates during development cycles

    Once you have identified a successful workspace and its most valuable components, leverage Oracle WebCenter's ability to turn that custom portal into a portal template. By creating a template from your already successful portal, you are empowering your enterprise by providing a starting point for future initiatives. Your new projects, new teams, and new web pages can benefit from lessons learned and adjustments that have already been made to optimize user experiences instead of starting from scratch.

    For a complete explanation of how to work with portal templates, be sure to read the Fusion Middleware documentation available online.

  • An entry-level programmer's best option [on hold]

    - by user134409
    I am facing a puzzle and I am not sure of the best way to make a decision. In my spare time, besides playing video games, I have developed some games - nothing fancy, just small projects to get a better grasp of programming. After I finished college and got my BA in Computer Science, I got a job as a web developer at a small firm. The next few months were very stressful, as I had no previous experience and tried my best to make up for it. But after 6 months my boss told me I was inefficient and not very independent, and let me go. To my credit, the help from my senior was very limited; I did learn a lot, but I learned it by myself. For example, they told me to do a UI in BackboneJS, and it took me a while, but I got it working (even if it was poorly designed). I managed to do it all by myself because my senior was very busy and did not have time even for my questions. Now I have found a new job, again in web development, but I am very afraid of what is going to happen next. I am afraid because I don't want to take the job and then be fired again after a couple of months; I get the feeling that this would be very bad on my CV - job hopping is like a red flag. They want to hire me, but I am aware that they are working with new technologies and maybe I will end up not coping with them. So the question is: would an entry-level programmer be better off with a starting job in QA/testing, working his way up from there? I did learn a lot from my first job, but it was a moral blow when they decided to fire me. I have low self-esteem and I know my skills as a programmer are not that great. But I like programming, I want to get better, and I want to have a long career in it - so that's basically my pickle. Thank you in advance for the answers.

  • Diving into a computer science career [closed]

    - by Willis
    Well, first I would like to say thank you for taking the time to read my question. I'll give you some background. I graduated two years ago from a local UC in my state with a degree in cognitive psychology and worked in a neuroscience lab. During this time I was exposed to some light Matlab programming and other programming tidbits, and before this I had a basic understanding of programming - my father worked in IT for a company when I was younger, so I picked up his books and learned things along the way growing up. Naturally I'm an inquisitive person, constantly learning, and I love challenges, and I have had exposure to some languages. At this point I want to fully pursue programming as a career, and I have always had this in the back of my head. Where do I start? I'm 25 and feel like I still have time to make a switch. I've immersed myself in the terminal/command prompt to start, but which language do I focus on? I've read the A+ book and am planning to take the exam, then the networking exam, but I want to deal with more programming, development, and troubleshooting. I understand I should get involved in open source, but where? I took the next step and got a small IT assistant job, but it doesn't really deal with programming or development, just troubleshooting and small network issues. Thank you!

  • loss of sound in ubuntu 12.04

    - by Leo Simon
    I'm running Linux E6520 3.2.0-56-generic #86-Ubuntu SMP Wed Oct 23 09:20:45 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux on a Dell Latitude E6530. (This is a new machine; I ran the same version of Linux on an older machine for a year without this happening.) I've been losing sound regularly, though I have not been able to isolate the trigger. I've scoured the web on this subject, in particular https://help.ubuntu.com/community/SoundTroubleshootingProcedure and "Audio stopped working suddenly in 12.04". Nothing from the first site seemed to work for me. From the second site, I learned enough to be able to fix the problem when it happens, but nothing on the web has helped me figure out why the problem is happening in the first place. Patching together stuff from the web, and with some blind luck, I've found that the following steps seem to restore sound:

      1. pulseaudio --kill
      2. pulseaudio --start
      3. pavucontrol -> Output Devices
      4. Click on the "Mute audio" icon, which mutes audio.
      5. Click on the "Mute audio" icon again, which unmutes audio.

    This obviously doesn't make sense: audio wasn't muted in the first place, but somehow, magically, toggling mute off and on seems to reset something. Can anybody suggest, from this information:

      - why sound is disappearing in the first place (it seems as though something is getting muted at the system level, but I don't know what)?
      - a simpler (command-line/script) way of restoring sound - in particular, is it possible to reset pavucontrol from the command line?

    Some other pieces of information that may be of use: the problem is clearly happening at the system level, since I've set up a clean new user and this user has the same problems that I do, so user-level fixes like deleting the .pulse directory don't help. Sound works fine in Windows (dual boot), so it's not a hardware problem. Any help/suggestions on this would be most appreciated.

  • Remember me or not?

    - by taeja87
    I was told to post this on Webmasters instead of Stack Overflow. Is it safe to have a "remember me" feature? Would it be somewhat safe (knowing it won't be 100% safe) to allow users to close their browser and come back still logged in? I am not exactly sure which way I should go after reading different things about safety. I learned about session fixation and implemented security to add more protection. From experience, on some sites, if "remember me" is checked, only your username/email is remembered and you are required to re-enter your password; other sites allow you to come and go as much as you want without being logged out after the browser has closed. If it is safe, what is the current best way of implementing a remember-me/stay-logged-in feature?

    http://stackoverflow.com/questions/3531377/best-practise-for-remember-me-feature
    http://stackoverflow.com/questions/5087969/what-is-the-code-for-stay-logged-in-or-remember-me-while-user-login-in-php
    http://bytes.com/topic/php/answers/881197-stay-logged-remember-me-php-sessions-cookies
    http://security.stackexchange.com/questions/41/good-session-practices

    Also: the site I am working on uses an email & password login.

  • Is Ruby on Rails supposed to have a steep learning curve or is it just me?

    - by Anita
    I'm a self-taught programmer. I've been learning RoR since October with varying intensity (sometimes all day, sometimes nothing for several weeks). Before that I knew only Java, but knew it pretty well. I've heard so much hype about RoR and how it's supposed to make you happy, productive, etc. So far it's only made me frustrated. I learned it out of the Agile book, and I suspect part of the difficulty might have to do with my not knowing JavaScript and CSS, and having only a shaky grasp of databases and HTML. But apparently it took me much longer to complete the project in the Agile book than other people, and I still don't remember much of it. There are some things about Rails that I just can't seem to get, e.g. when to use symbols and when NOT to, or how dynamic methods are called. Recently I was given a small Rails assignment where I'm asked to make a small change to the interface. It's taken me around 25 hours and although I've made some progress in understanding the code, I still have no idea how to proceed. I can't even ask Stack Overflow because there is so much code I'll have to provide to give context. So my question is in the title: is RoR supposed to take a long time to learn or am I just slow? Can it be that I've been learning from the wrong book? My learning style is such that I either understand nothing or understand everything, if that makes sense. Thanks!

  • Continuous Retraining Tutorials

    - by foampile
    I am looking for an online resource where you can sort of design your future professional profile, and it would provide you a set of tutorials that you would complete to get a basic level of familiarity with the related technologies. One of my professional problems is my learning style: I can learn either by direct hands-on experience OR by following a rigid training program that goes in a linear progression. I have a hard time learning in a multidimensional environment where the biggest challenge is to determine what needs to be learned and how to pick from a ton of books, and the least problem is to go through the actual material. So I am looking for a reputable source that will knock those two confusing questions out for me, so I can kick back and continuously upgrade my skills without having to worry about the what and how myself. I have found some decent online tutorials for various technologies, but I have never found a single place that has all or most developer-education tutorials following the same or a similar interface. I am kind of a lazy learner and would rather follow confirmed learning steps than figure out my own education path just to realize down the road that I did it all wrong. Is there a tutorial mega-boutique like that online?

  • Shoring up deficiencies in a "home grown" programmer?

    - by JohnP
    I started out by teaching myself BASIC on a Vic 20, and in college (mid '80s) I had Fortran, Pascal, limited C, machine language, and assembler (with a smattering of COBOL). I didn't touch programming from approximately 1989 to 1999. At that point, I was lucky enough to get hired as a Clipper programmer. It took me about 6 months to learn most of it, and by now (13 years) I'm pretty much an expert in it. I have also picked up ColdFusion, some C#, some ASP, SQL, etc. I know programming structures, but in most languages I'm missing the esoterics, and I know my code could be much tighter. The problem is that I've learned only what I needed to in order to get the job done. This results in a lot of gaps in practical knowledge. I am also missing out on a ton of theory. Things like SRP, refactoring, etc. are alien terms (although I grok the intent after a short read). In addition, I am now in the position of teaching junior programmers the company and our software, and I don't want to pass on the knowledge gaps. I know this is somewhat of a subjective question and may be closed, but how do you go back and pick up what you've missed?

  • What does the ".align" x86 Assembler directive do exactly? [migrated]

    - by Sinister Clock
    I will list exactly what I do not understand, and show you the parts I cannot understand as well. First off, the .align directive:

        .align integer, pad

    "The .align directive causes the next data generated to be aligned modulo integer bytes."

    1. What is implied by "causes the next data generated to be aligned modulo integer bytes"? I can surmise that the next data generated is a memory-to-register transfer, no? Modulo would imply the remainder of a division. I do not understand "to be aligned modulo integer bytes"... What would be the remainder of a simple data declaration, and how would the next data generated being aligned by a remainder be useful? If the next data is aligned modulo, that is saying the next generated data, whatever that means exactly, is the remainder of an integer? That makes absolutely no sense.

    2. What specifically would the .align directive - say, .align 8 - issued in x86 for a data byte compiled from a C char (i.e., char CHARACTER = 0;) be for? Or specifically coded directly with that directive, not preliminary assembly code after compiling C?

    3. I have debugged in assembly and noticed that any C/C++ data declarations, like chars, ints, floats, etc., will get the directive .align 8 added to each of them, plus other directives like .bss, .zero, .globl, .text, .Letext0, .Ltext0. What are all of these directives for - or, at least as my main question, how do they affect the opcodes, and are all of them necessary?
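
    Not GAS itself, but the same idea viewed from the C++ side (an assumed illustration; compilers typically emit .align or .p2align in front of data arranged like this): "aligned modulo N" simply means pad bytes are inserted until the next object's address is a multiple of N.

        #include <cstddef>
        #include <iostream>

        struct Record {
            char tag;                 // occupies 1 byte...
            alignas(8) long long id;  // ...then 7 pad bytes are inserted so that
                                      // 'id' begins at an address that is 0
                                      // modulo 8, i.e. a multiple of 8
        };

        int main() {
            std::cout << offsetof(Record, id) << "\n";  // 8: 1 tag byte + 7 pad
            std::cout << sizeof(Record) << "\n";        // 16 on typical targets
        }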

  • What should a programmer's yearly routine be to maximize their technical skills?

    - by sguptaet
    2 years ago I made a big career change into programming. I learned various technologies on my own without any prior experience. I really love it and feel lucky with all the resources around us to help us learn: books, courses, open source, etc. There are so many avenues. I'm wondering what a good routine would be to follow to maximize my software development skills. I don't believe just building software is the way, because that leaves no time for learning new concepts or technologies. I'm looking for an answer like this:

      - Take a new-concept sabbatical/workshop 2 weeks per year.
      - Read 1 theoretical and 1 practical programming book per year.
      - Learn 1 additional language every 2 years.
      - Take a 1-week vacation every 6 months.
      - Etc.

    I realize that the above might sound naive and unrealistic, as there are so many factors. But I'd like to know the "recipe" that you think is best, one that will serve as a guide for people.

  • Resume on 30 Days of SharePoint

    Dear readers, as you might have noticed... it was an organisational disaster on my end! Even though I continued my studies and research on Microsoft SharePoint 2013 during the last 30 days, I wasn't able to write an article a day to keep you posted on my progress. Nonetheless, I gathered a good number of additional blogs, mainly SharePoint MVP sites, and online forums which will be helpful in the next couple of weeks while I actually develop a C#-based client which will connect an existing 'legacy' application to SharePoint as a document management system (DMS), besides other already existing solutions.

    Finding excuses
    Well, no. Not really. I simply didn't block any or enough time every day to write down my progress during my own challenge. My log book on learning about SharePoint stands at 41 hours and 15 minutes during this month, which means that I spent an average of more than 1 hour per day on getting into SharePoint. I know that might sound a little bit low, but also keep in mind that I went for the challenge on top of my daily job and private responsibilities. During the same period there were two priority-0 incidents from clients - external root cause - which took precedence over this leisure project.

    More to come
    Anyway, it was a first trial, and despite the low level of reporting on my blog, I'm confident about what I learned during the last 30 days, and I'm ready to implement the client's requirements. At least, I would say that I have a better understanding of the road map, or the path to walk, during the next month. As time and secrecy allow, I'm going to note down some bits and pieces... During the process of development, I'm going to 'cheat' on the challenge summary article and add links to those new entries, just for the sake of completeness.

    Next challenge?
    Hmm, there had been ideas during the last meetup of the Mauritius Software Craftsmanship Community (MSCC) regarding certifications in IT, and eventually we might organise some kind of a study group for specific exams, most probably Microsoft exams towards MCSD Web Developer or Windows Developer.

  • Why does there seem to be a lot of fear in choosing the "wrong" language to learn?

    - by Shewbox
    Perhaps it's just me, but as a current CS student I have already come across many questions on this site and elsewhere, not just "Which language should I use for x?" but also "Does anyone still use language Y?" My first CS class was taught in Scheme, which, if I'm not mistaken, isn't widely used (at least in comparison to languages like Java, PHP, Python, etc.). Many of my classmates balked at the idea of having to learn a language they would never have to use again, but I don't quite understand where so much of this fear of learning less popular languages comes from. No, I may not use Scheme in any job I get, but I certainly don't regret having learned to use it (albeit at a very beginner level, in a not very in-depth manner, in that one semester). I am taking a search engines class this semester, which is done in Perl, and again I am seeing classmates complaining about the language choice. I can understand having a favorite language and disliking others, but why do some get worked up over learning one in the first place? Can you really learn the "wrong" language? Isn't learning something like Scheme or Haskell good mental exercise if nothing else, and useful at least as exposure to different ways of solving problems?

  • Making a Living Developing Games

    - by cable729
    I'm in my last year of high school, and I've been looking at colleges. I'm taking a C++ class at a local community college and I don't feel that it's worth it - I could have learned everything in that class in a week. This has me thinking: would a CS degree even be worth it? How much can it teach me if I can learn everything on my own? Even if I do need to learn more advanced subjects, many colleges put their material online AND I can buy a book. Will companies hire me if I don't have a CS degree? If I have a portfolio, will I stand a chance? What kinds of things are needed in the portfolio? I want to live doing what I love - programming. So I will do it. I'm just not sure that a CS degree will do anything for me. In addition, if there is a benefit to getting a CS degree, which places are the best?

  • Is it reasonable to null guard every single dereferenced pointer?

    - by evadeflow
    At a new job, I've been getting flagged in code reviews for code like this:

        PowerManager::PowerManager(IMsgSender* msgSender)
          : msgSender_(msgSender) { }

        void PowerManager::SignalShutdown()
        {
            msgSender_->sendMsg("shutdown()");
        }

    I'm told that the last method should read:

        void PowerManager::SignalShutdown()
        {
            if (msgSender_) {
                msgSender_->sendMsg("shutdown()");
            }
        }

    i.e., I must put a NULL guard around the msgSender_ variable, even though it is a private data member. It's difficult for me to restrain myself from using expletives to describe how I feel about this piece of 'wisdom'. When I ask for an explanation, I get a litany of horror stories about how some junior programmer, some year, got confused about how a class was supposed to work and accidentally deleted a member he shouldn't have (and set it to NULL afterwards, apparently), and things blew up in the field right after a product release, and we've "learned the hard way, trust us" that it's better to just NULL-check everything. To me, this feels like cargo cult programming, plain and simple. A few well-meaning colleagues are earnestly trying to help me 'get it' and see how this will help me write more robust code, but... I can't help feeling like they're the ones who don't get it. Is it reasonable for a coding standard to require that every single pointer dereferenced in a function be checked for NULL first - even private data members? (Note: to give some context, we make a consumer electronics device, not an air traffic control system or some other 'failure-equals-people-die' product.)

    EDIT: In the above example, the msgSender_ collaborator isn't optional. If it's ever NULL, it indicates a bug. The only reason it is passed into the constructor is so PowerManager can be tested with a mock IMsgSender subclass.
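
    A common middle ground - a minimal sketch, not from the post, assuming the team will accept failing fast - is to validate the collaborator once at construction, so that every later dereference relies on an established invariant instead of repeating the guard:

        #include <cassert>

        class IMsgSender {
        public:
            virtual ~IMsgSender() = default;
            virtual void sendMsg(const char* msg) = 0;
        };

        class PowerManager {
        public:
            explicit PowerManager(IMsgSender* msgSender) : msgSender_(msgSender) {
                // Fail fast: a null collaborator is a bug, so catch it here, once.
                assert(msgSender_ != nullptr);
            }
            void SignalShutdown() { msgSender_->sendMsg("shutdown()"); }  // no guard
        private:
            IMsgSender* msgSender_;  // invariant: non-null after construction
        };

    Accepting an IMsgSender& in the constructor expresses the same invariant in the signature itself, at the cost of making the dependency non-reseatable.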
