Search Results

Search found 1997 results on 80 pages for 'early'.

Page 41/80 | < Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >

  • Simple algorithm for a sudoku solver java

    - by user142050
    Just a quick note first: I originally asked this question on Stack Overflow but was referred here instead. I've been stuck on this thing for a while; I just can't wrap my head around it. For a homework assignment, I have to produce an algorithm for a Sudoku solver that can check what number goes in a blank square in its row, its column and its block. It's a regular 9x9 Sudoku, and I'm assuming that the grid is already printed, so I have to produce the part where it solves it. I've read a ton of stuff on the subject; I just get stuck expressing it. I want the solver to do the following: if the value is smaller than 9, increase it by 1; if the value is 9, set it to zero and go back 1; if the value is invalid, increase it by 1. I've already read about backtracking and such, but I'm in the early stages of the class so I'd like to keep it as simple as possible. I'm more capable of writing pseudo code than the algorithm itself, and it's the algorithm that is needed for this exercise. Thanks in advance for your help, guys.
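
    A minimal sketch of that try-a-value-and-back-up idea, written as a recursive Java method rather than the iterative loop described above; the grid representation (a 9x9 int array with 0 for blanks) and the method names are assumptions for illustration, not part of the original question:

        public class SudokuSolver {

            // Fills the blanks in place and returns true if a solution was found.
            // grid[row][col] holds 1-9 for fixed clues and 0 for blank squares.
            public static boolean solve(int[][] grid) {
                for (int row = 0; row < 9; row++) {
                    for (int col = 0; col < 9; col++) {
                        if (grid[row][col] == 0) {                  // found a blank square
                            for (int value = 1; value <= 9; value++) {
                                if (isValid(grid, row, col, value)) {
                                    grid[row][col] = value;         // tentatively place the value
                                    if (solve(grid)) {
                                        return true;                // the rest of the grid worked out
                                    }
                                    grid[row][col] = 0;             // undo and try the next value
                                }
                            }
                            return false;                           // 1-9 all failed: backtrack
                        }
                    }
                }
                return true;                                        // no blanks left: solved
            }

            // Checks the row, the column and the 3x3 block for a duplicate of value.
            private static boolean isValid(int[][] grid, int row, int col, int value) {
                for (int i = 0; i < 9; i++) {
                    if (grid[row][i] == value || grid[i][col] == value) {
                        return false;
                    }
                }
                int blockRow = (row / 3) * 3;
                int blockCol = (col / 3) * 3;
                for (int r = blockRow; r < blockRow + 3; r++) {
                    for (int c = blockCol; c < blockCol + 3; c++) {
                        if (grid[r][c] == value) {
                            return false;
                        }
                    }
                }
                return true;
            }
        }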

    Read the article

  • People != Resources

    - by eddraper
    Ken Tabor’s blog post “They Are not Resources – We Are People” struck a chord with me.  I distinctly remember hearing the term “resources” within the context of “people” for the first time back in the late 90’s.  I was in a meeting at Compaq and a manager had been faced with some new scope for an IT project he was managing.  His response was that he needed more “resources” in order to get the job done.  As I knew the timeline for the project was fixed and the process for acquiring additional funding would almost certainly extend beyond his expected delivery date, I wondered what he meant.  After the meeting, I asked him what he meant… his response was that he needed some more “bodies” to get the job done.  For a minute, my mind whirred… why is it so difficult to simply say “people?”  This particular manager was neither a bad person nor a bad manager… quite the contrary.  I respected him quite a bit and still do.  Over time, I began to notice that he was what could be termed an “early adopter” of many “Business speak” terms – such as “sooner rather than later,” “thrown a curve,” “boil the ocean” etcetera.  Over time, I’ve discovered that much of this lexicon can actually be useful, though cliché and overused.  For example, “Boil the ocean” does serve a useful purpose in distilling a lot of verbiage and meaning into three simple words that paint a clear mental picture.  The term “resources” would serve a similar purpose if it were applied to the concept of time, funding, or people.  The problem is that this never happened.  “Resources”, “bodies”, “ICs” (individual contributors)… this is what “people” have become in the IT business world.  Why?  We’re talking about simple word choices here.  Why have human beings been deliberately dehumanized and abstracted in this manner? What useful purpose does it serve other than to demean and denigrate?

    Read the article

  • Share home directory between Linux and Windows dual boot

    - by user877329
    This question is somewhat similar to How to use Windows Share as home directory, but in this case Windows is not running. I have installed a dual-boot configuration with Ubuntu 12.04 and Windows. My Windows partition is mounted on /C. Now I want either Ubuntu to locate home directories in /C/Users, which is where the Windows accounts live, or Windows to use D:\home for home directories (D: being the drive letter under which Windows would see the Ubuntu root). For the first approach, I have managed to create a test user account: test-user:x:1004:1001:Test:/C/Users/test-user:/bin/bash The account works, but test-user cannot run any X session. From .xsession-errors: chmod: changing rights on "/C/Users/test-user/.xsession-errors": Operation not permitted Would it help to get rid of that chmod, which has no effect anyway? If so, how? If I use the second approach, I need the Ext2fsd driver, which seems to work, but I am not sure whether Windows maps the Ext2 file system that early. Here is my fstab:
        proc /proc proc nodev,noexec,nosuid 0 0
        UUID=e7cef061-ed8d-4a82-b708-0c8f4c6f297f / ext3 errors=remount-ro 0 1
        UUID=2CDCEB43DCEB0644 /C ntfs defaults,umask=007,gid=46 0 0
        UUID=b087b5c0-b4bd-47e7-8d34-48ad9b192328 none swap sw 0 0
    Update: I found something here: http://www.tuxera.com/community/ntfs-3g-advanced/ It will work if I do a correct mapping between NT users and Linux users.
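
    One thing worth checking for the first approach (this is an assumption on my part, not something stated in the post): with the ntfs line above, everything under /C is owned by root, because ntfs-3g synthesizes ownership and permissions from the mount options, so the session's chmod on .xsession-errors can only fail. A sketch of an fstab entry that instead hands ownership of the Windows partition to the test account, reusing the uid/gid from the passwd entry above:

        UUID=2CDCEB43DCEB0644 /C ntfs-3g defaults,uid=1004,gid=1001,umask=007 0 0

    Whether that alone is enough for a working X session I cannot say; chmod may still be a no-op on NTFS, but at least the home directory would belong to the right user.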

    Read the article

  • Oracle OpenWorld 2012 Hands-on Lab: “Leading Your Everyday Application Integration Projects with Enterprise SOA”

    - by Lionel Dubreuil
    Sharpen your Oracle skill sets and master Oracle technology in Oracle OpenWorld Hands-on Labs. In self-paced, practical learning sessions covering everything from business applications to middleware, database, storage, and enterprise management solutions, you'll discover new ways to derive maximum benefit from your Oracle hardware and software solutions. Oracle experts will be available in person to answer questions and guide you through each lab. Hands-on Labs fill up early, and seats are limited, so don't be late. HOL10093 - Leading Your Everyday Application Integration Projects with Enterprise SOA is scheduled for:
    Date: Monday, Oct 1
    Time: 10:45 AM - 11:45 AM
    Location: Marriott Marquis - Salon 5/6
    In this hands-on lab, experience firsthand how Oracle Enterprise Repository, Oracle Application Integration Architecture (AIA) Foundation Pack, and Oracle SOA Suite work together to help you drive your enterprise-wide integration projects. From asset management, discovery, and management in Oracle Enterprise Repository to integration of content in Oracle AIA Foundation Pack operating on the Oracle SOA Suite platform, discover how you can develop integrations to support business agility. Take advantage of Oracle-delivered integration assets and validate your services for compliance within Oracle JDeveloper. You will get your hands on the tools and talk with Oracle experts in this hands-on lab. Objectives for this session are to:
    - Use Oracle Enterprise Repository to manage application interfaces, composite applications, and business processes
    - See how Oracle Enterprise Repository can benefit every service-based application integration project
    - Learn how to govern services through the software lifecycle and validate your services for compliance

    Read the article

  • World Location issues with camera and particle

    - by Joe Weeks
    I have a bit of a strange question. I am adapting the existing code base, including the tile engine, from the book XNA 4.0 Game Development by Example by Kurt Jaegers; the aspect I am working on is the 2D platformer in the last couple of chapters. I am creating a platformer which has a scrolling screen (similar to an old-school screen chase). I originally did not have any problems with this aspect, as it is simply a case of updating the camera position on the X axis with game time; however, I have since added a particle system to allow the players to fire weapons. This particle shot is updated via the world position, and I have translated everything correctly in terms of the world position when the collisions are checked. The crux of the problem is that the collisions only work once the screen is static; whilst the camera is moving to follow the player, the collisions are offset and are hitting blocks that are no longer there. My collision test for particles is as follows (there are two of them, horizontal and vertical):

        protected override Vector2 horizontalCollisionTest(Vector2 moveAmount)
        {
            if (moveAmount.X == 0)
                return moveAmount;

            Rectangle afterMoveRect = CollisionRectangle;
            afterMoveRect.Offset((int)moveAmount.X, 0);

            Vector2 corner1, corner2;

            // new particle world alignment code.
            afterMoveRect = Camera.ScreenToWorld(afterMoveRect);
            // end.

            if (moveAmount.X < 0)
            {
                corner1 = new Vector2(afterMoveRect.Left, afterMoveRect.Top + 1);
                corner2 = new Vector2(afterMoveRect.Left, afterMoveRect.Bottom - 1);
            }
            else
            {
                corner1 = new Vector2(afterMoveRect.Right, afterMoveRect.Top + 1);
                corner2 = new Vector2(afterMoveRect.Right, afterMoveRect.Bottom - 1);
            }

            Vector2 mapCell1 = TileMap.GetCellByPixel(corner1);
            Vector2 mapCell2 = TileMap.GetCellByPixel(corner2);

            if (!TileMap.CellIsPassable(mapCell1) || !TileMap.CellIsPassable(mapCell2))
            {
                moveAmount.X = 0;
                velocity.X = 0;
            }
            return moveAmount;
        }

    And the camera is pretty much the same as the one in the book, with this added (as an early test):

        public static void Update(GameTime gameTime)
        {
            position.X += 1;
        }

    Read the article

  • Is ZeroMQ a good choice to make a Python app and a C# managed assembly work together?

    - by Alex Bausk
    I have a task that involves talking to a .NET-based API (namely AutoCAD) to retrieve data, send commands, and react to events. I want to separate the API operations and the proper program logic (largely already implemented in Python) by using natural tools for both: a C# DLL for the former and a Python app for the latter. To connect these two pieces, I began exchanging JSON in ZeroMQ messages. I'm at an early development stage, but having recently discovered that ZeroMQ does not guarantee message delivery or ordering, I have reservations about whether this is a feasible way to go. Right now my app is a very basic REQ/REP pair, and I plan to handle reacting to events and executing different commands by adding some sort of 'recipient-function' field to my message format. The reason I want to use ZMQ is that I might be able to scale the software into a larger, multi-user, distributed solution sometime. I am a lay programmer, so I would ask for your advice about this architecture. Should I just go ahead with it and plan to deal with message reliability/ordering when problems appear? Should I consider developing some kind of REST wrapper around ZMQ?

    Read the article

  • How to sync client and server at the first frame

    - by wheelinlight
    I'm making a game where an authoritative server sends information to all clients about states and positions of objects in a 3D world. The player can control his character by clicking on the screen to set a destination for the character, much like in the Diablo series. I've read most of the information I can find online about interpolation, reconciliation, and general networking architecture (Valve's, for instance). I think I understand everything, but one thing seems to be missing in every article I read. Let's say we have an interpolation delay of 100 ms, a server tick rate of 50 ms, and a latency of 200 ms; how do I know when 100 ms have passed on the client? If the server sends the first update at t=0, can I assume it arrives at t=200, thereby assuming that all packets take the same amount of time to reach the client? What if the first packet arrives a little quicker, for instance at t=150? I would then be starting the client at t=150, and at t=250 it will think 100 ms have passed since it connected to the server when in fact only 50 ms have passed. Hopefully the above paragraph is understandable. The summarized question would be: how do I know at what tick to start simulating on the client? EDIT: This is how I ended up doing it: the client keeps a clock (approximately) in sync with the server. The client then simulates the world at simulationTime = syncedTime - avg(RTT)/2 - interpolationTime. The round-trip time can fluctuate, so I average it out over time. By only keeping the most recent values when calculating the average, I hope to adapt to more permanent changes in latency. It's still too early to draw any conclusions. I'm currently simulating bad network connections, but it's looking good so far. Anyone see any possible problems?
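
    A minimal sketch of the approach from the EDIT; only the formula comes from the post itself, while the class and method names, the sample window size, and the use of a deque are assumptions for illustration:

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Simulate the world at syncedTime - avg(RTT)/2 - interpolationDelay,
        // averaging only the most recent RTT samples so the estimate can adapt
        // to longer-term changes in latency.
        public class SimulationClock {
            private static final int MAX_SAMPLES = 20;          // how many recent RTTs to keep
            private final Deque<Double> rttSamples = new ArrayDeque<>();
            private final double interpolationDelay;            // e.g. 0.100 seconds

            public SimulationClock(double interpolationDelay) {
                this.interpolationDelay = interpolationDelay;
            }

            // Called whenever a ping/ack exchange yields a fresh round-trip time.
            public void addRttSample(double rttSeconds) {
                rttSamples.addLast(rttSeconds);
                if (rttSamples.size() > MAX_SAMPLES) {
                    rttSamples.removeFirst();                    // drop the oldest sample
                }
            }

            private double averageRtt() {
                if (rttSamples.isEmpty()) {
                    return 0.0;
                }
                double sum = 0.0;
                for (double rtt : rttSamples) {
                    sum += rtt;
                }
                return sum / rttSamples.size();
            }

            // syncedTime is the client's estimate of the current server time.
            public double simulationTime(double syncedTime) {
                return syncedTime - averageRtt() / 2.0 - interpolationDelay;
            }
        }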

    Read the article

  • About C# objects and the possibilities it has

    - by user527825
    As a novice programmer, I always wonder about C#'s capabilities. I know it is still early for me to judge that, but all I want to know is whether C# can do complex stuff, or anything outside the Windows OS. I think C# is a proprietary language (I don't know if I said that right), meaning you can't use it outside Visual Studio or Windows. Also, you can't create your own control (is that called an object?); you are forced to use the ones available in the toolbox and their properties and methods. Can C# be used with the OpenGL API or the DirectX API? Finally, it always bothers me when I start doing things in Visual Studio. I know it sounds arrogant to say, but sometimes I don't like being forced to use something even if it's helpful; I feel (do I have the right to feel this?) that I want to do everything by myself. Don't laugh; I just feel that this will give me a better understanding. Is Visual C# like using MaxScript inside 3ds Max, in that C# is exclusively for Windows, Forms, and Windows-related components, the way MaxScript is only for 3D editing and manipulation inside that one piece of software? If it is too difficult for a beginner, I hope you don't answer the fourth question, as I don't have much motivation and I want to keep the little I have. Note: sorry for my English; I am self-taught and have never used the language with native speakers, so expect some errors. I have a lot of questions about many things: what do you think is a reasonable daily number of questions to ask that would not bother the admins of the site and the members here? Thank you for your time.

    Read the article

  • Sneak Preview - New CodePlex UI

    We have been busy the last several months working to improve the overall experience for the CodePlex community. We have been working through some of the top requested items, such as our big announcement last week enabling Git. Something that is not explicitly on the feature request list is the request to update the web site look and user experience. As Brian Harry mentioned, the Future of CodePlex is Bright, so it is time to start brightening up the place.
    Goals: As with any sizeable change, you need to decide the scope of changes you want to tackle. We decided that we would optimize for incremental improvements versus taking months to get a completely new experience released. Our goals with this user experience work are to refresh the look and feel of the site, introduce new visual elements and set up the site for future structural changes. So this is not the end, just the beginning.
    Early Views: I want to set a few expectations first: these screen shots are not final, and we are still working through the content and final element placement. Feedback is always welcome; just keep that in mind as you review the images.
    New CodePlex Home: The navigation changed a good bit on the home page and we have moved the search to a more consistent location across the site.
    User Profile / Users Home Page: The goal was to make it easier to find and take action on common tasks such as creating projects.
    Project Home / Issue Tracker: This should give you a taste of where we are going with the new user experience.
    As always we love the feedback: either comment below, find us on Twitter @codeplex or @mgroves84, or create or vote up suggestions.

    Read the article

  • LINQ to Twitter Maintenance Feedback

    - by Joe Mayo
    Originally posted on: http://geekswithblogs.net/WinAZ/archive/2013/06/16/linq-to-twitter-maintenance-feedback.aspx
    It's always fun to receive positive feedback on your work. If you receive a sufficient amount of positive feedback, you know you're doing something right. Sometimes, people provide negative feedback too. There are a couple of ways to handle it: come back fighting or engage for clarification. The way you handle the negative feedback depends on what your goals are.
    Feedback Approaches
    If you know the feedback is incorrect and you need to promote your idea or product, you might want to come back fighting. The feedback might just be comments by a troll or competitor wanting to spread FUD. However, this could be the totally wrong approach if you misjudge the source and intentions of the feedback. In a lot of cases, feedback is a golden opportunity. Sometimes, a problem exists that you either don't know about or don't realize the true impact of. If you decide to come back fighting, you might lose the opportunity to learn something new. However, if you engage the person providing the feedback, looking for clarification, you might learn something very important. Negative feedback and its clarification can lead to the collection of useful and actionable data.
    In my case, something that prompted this blog post, I noticed someone who tweeted a negative comment about LINQ to Twitter. Normally, any less-than-stellar comments are from folks that need help, so I help if I can. This was different; it was more along the lines of "Don't use LINQ to Twitter." This is an open source project, the comment didn't come from a competing project, and it sounded more like an expression of frustration. So I engaged. Not only did the person respond, but I got some decent quality feedback. What's also interesting is that a couple of other side conversations sprouted on the subject, which gave me more useful data.
    LINQ to Twitter Thread
    Actions
    Essentially, this particular issue centered around maintenance. There are actually several sub-issues at play here: dependencies, error handling, debugging, and visibility. I'll describe each one and my interpretation.
    Dependencies
    Dependencies are where a library has references to other libraries. This means that when you build your application, you need DLLs for the entire dependency graph of your application. There are several potential problems with this, including more libraries for configuration management, potential versioning mismatches, and lack of cross-platform support. In the early days of LINQ to Twitter, I allowed developers to contribute and add dependencies, but it became very problematic (for the reasons stated). It was like a ball and chain that kept me from moving forward. So, I refactored and pulled other open source into my project to eliminate external dependencies. This lets me fix the code in my project without relying on someone else to upgrade or fix their DLL. The motivation for this was early negative feedback that translated into important data I could act on. Today, LINQ to Twitter has zero dependencies. Note: rejecting good code from community members who worked hard to make your project better is a painful experience in itself. I have to point out that no contribution was in vain, because they had a positive influence on my subsequent refactoring that resulted in a better developer experience.
    Error Handling
    Error handling has been a problem in the past. I have this combination of supporting both synchronous and asynchronous (APM) processing that can be complex at times. Within the last 6 months, I did a fair amount of refactoring to detect errors and process them properly. I also refactored TwitterQueryException so it includes important data from Twitter. During this refactoring, I've made breaking changes that I felt would improve the development experience (small things like renaming a callback property to Exception, rather than Error). I think the async error handling is much better than it was a year ago. For all the work I've done, there is more to do. I think that a combination of more error handling support, e.g. improving semantics, and education through documentation and samples will improve the error handling story. Because of what I've done so far, it isn't bad, but I see opportunities for improvement.
    Debugging
    Debugging can be painful. Here's why: you have multiple layers of technology to navigate to figure out where the real problem is: the Twitter API, security, HTTP, LINQ to Twitter, and the application. You can probably add your own nuances to that list, but the point is that debugging in this environment can be complex. I think that my plans for error handling will contribute to making the debugging process easier. However, there's more I can do in the way of documentation and guidance. Some of the questions to be answered revolve around how, when something goes wrong, the developer figures out that there is a problem, what the problem is, and what to do about it. One example that has gone a long way toward helping LINQ to Twitter developers is the 401 FAQ. A 401 Unauthorized is the error that the Twitter API returns when a user isn't able to authenticate, and it is one of the most difficult problems faced by LINQ to Twitter developers. What I did was read guidance from Twitter and collect techniques from my own development and from helping other developers, to compile an extensive list of reasons for the 401 and ways to fix the problem. At one time, over half of the questions I answered in the forums were to help solve 401 issues. After publishing the 401 FAQ, I rarely get a 401 question, and when I do it's because the person didn't know about the FAQ. If the person is too lazy to read the FAQ, that's not my issue, but the impact on support issues has been dramatic. I think debugging can benefit from the education and documentation approach, but I'm always open to suggestions on whatever else I can do.
    Visibility
    Visibility is a nuance of the error handling/debugging discussion, but is deeply rooted in comfort and control. The questions to ask in this area are what is happening as my code runs and how testable the code is. In support of these areas, LINQ to Twitter does have logging and TwitterContext properties that help you see what's happening on requests. The logging functionality allows any developer to connect a TextWriter to the Log property of TwitterContext to see what's happening. Further, TwitterContext has a Headers property to see the headers Twitter returns and a RawResults property to show the JSON string Twitter returns. From a testing perspective, I've been able to write hundreds of unit tests, over 600 when this post is published, and growing. If you write your own library, you have full control over all of these aspects. The tradeoff here is that while you have access to the LINQ to Twitter source code and can modify it for all the visibility you want, LINQ to Twitter *will* change (which is good) and you will have to figure out how to merge that with your changes (which is hard). The fact is that this is a limitation of any 3rd-party library, not just LINQ to Twitter. So, it's a design decision where the tradeoff is between control and productivity. That said, there are things I can do with LINQ to Twitter to make the visibility story more compelling. I think there are opportunities to improve diagnostics. This would be a ton of work, because it would need to provide multi-level logging that can be tuned for production and support any logging provider you want to attach. I've considered approaches such as how the new Semantic Logging application block connects to Windows Error Reporting as a potential target. Whatever I do would need to be extensible without creating native external dependencies, e.g. think of how many 3rd-party libraries force a dependency on a logging framework that you don't use. So, this won't be an easy feat, but I believe it can be part of the roadmap. I think that a lot of developers are unaware of the existing visibility features, so the first step would be to provide more documentation and guidance. My thought is that this would lead to more feedback that will help improve this area.
    Summary
    Recent feedback highlights some of the items that are important to LINQ to Twitter developers, such as dependencies, error handling, debugging, and visibility. I know that there are maintenance issues that have been problems for LINQ to Twitter developers in the past. I've done a lot of work in this area, such as improving error handling, adding visibility features, and providing extensive API documentation. That said, there is more to be done to make LINQ to Twitter the best Twitter API experience available for .NET developers, and I welcome anyone's thoughts on what I've written here or on new improvements. @JoeMayo

    Read the article

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server that all the websites connect to, as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on, from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter. CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs. multiple databases; that question has been answered numerous times. The question is about the pros and cons, for a deployment like this, of managing all the websites centrally (one server) vs. trying to keep them all in sync if they each have their own db (multiple servers). REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at a central order management system integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?

    Read the article

  • JSR Updates and EC Meeting Tuesday @ 15:00 PST

    - by Heather VanCura
    JSR 310, Date and Time API, has moved to JCP 2.9 (first JCP 2.9 JSR!). JSR 236, Concurrency Utilities for Java EE, has published an Early Draft Review. This review ends 15 December 2012. Tomorrow, Tuesday 20 November, is the last Public EC Meeting of 2012, and the first EC meeting with the merged EC. The second hour of this meeting will be open to the public at 3:00 PM PST. The agenda includes JSR 355 (EC merge) implementation report, JSR 358 (JCP.next.3) status report, JCP 2.8 status update and community audit program. Details are below. We hope you will join us, but if you cannot attend, not to worry: the recording and materials will also be public on the JCP.org multimedia page.
    Meeting details
    Date & Time: Tuesday November 20, 2012, 3:00 - 4:00 pm PST
    Location: Teleconference
    Dial-in: +1 (866) 682-4770 (US), conference code 627-9803, security code 52732 ("JCPEC" on your phone handset). For global access numbers see http://www.intercall.com/oracle/access_numbers.htm or +1 (408) 774-4073.
    WebEx: Browse for the meeting from https://jcp.webex.com. No registration required (enter your name and email address). Password: JCPEC
    Agenda:
    - JSR 355 (the EC merge) implementation report
    - JSR 358 (JCP.next.3) status report
    - 2.8 status update and community audit program
    - Discussion/Q&A
    Note: The call will be recorded and the recording published on jcp.org, so those who are unable to join in real time will still be able to participate.

    Read the article

  • Question on methods in Object Oriented Programming

    - by mal
    I'm learning Java at the minute (my first language), and as a project I'm looking at developing a simple puzzle game. My question relates to the methods within a class. I have my Block-type class; it has many attributes, set methods, get methods and just plain methods. There are quite a few. Then I have my main board class. At the moment it does most of the logic, positioning of sprites and collision detection, and then draws the sprites, etc. As I am learning to program as much as I'm learning to program games, I'm curious to know how much code is typically acceptable within a given method. Is there such a thing as having too many methods? All my draw functionality happens in one method; should I break this into a few 'sub' methods? My thinking is that if I find at a later stage that the for loop I'm using to cycle through the array of sprites searching for collisions in the spriteCollision() method is inefficient, I can code a new method and just replace the old method calls with the new one, leaving the old code intact. Is it bad practice to have a method that contains one if statement, and to place the call for that method in the for loop? I'm very much in the early stages of coding/designing and I need all the help I can get! I find it a little intimidating when people talk about throwing together a prototype in a day, too! Can't wait until I'm that good!
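
    For what it's worth, a hypothetical sketch of the kind of extraction being described; the class and method names here are invented for illustration and are not from the question. The loop in spriteCollision() stays short, and the actual test lives in its own small method that can later be swapped for a faster implementation without touching any callers:

        import java.awt.Rectangle;
        import java.util.ArrayList;
        import java.util.List;

        // Illustrative only: one small, single-purpose method per concern keeps the
        // loop readable and makes the collision test easy to replace later.
        public class Board {
            private final List<Sprite> sprites = new ArrayList<>();

            public void spriteCollision(Sprite subject) {
                for (Sprite other : sprites) {
                    if (collides(subject, other)) {
                        subject.onCollision(other);
                    }
                }
            }

            // A single, tiny test method: cheap to call in a loop, easy to unit test,
            // and easy to swap out (e.g. for a spatial grid) if it proves inefficient.
            private boolean collides(Sprite a, Sprite b) {
                return a != b && a.getBounds().intersects(b.getBounds());
            }
        }

        interface Sprite {
            Rectangle getBounds();
            void onCollision(Sprite other);
        }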

    Read the article

  • Korea's Anti-Abortion / Pro-Life Movement

    - by Randy Walker
    The South Korean government is in dire straits. The national birth rate continues to decline, and as the population grows older, there aren't enough children being born to support the country long term. The social issues of the post-Korean War era are coming back to haunt the now-empowered nation. Torn apart by the Korean War (nicknamed the forgotten war in America) and facing starvation, South Korea allowed hundreds of thousands of its children to be adopted abroad. This created a problem of epidemic proportions, essentially devaluing life and child rearing in Korea. In an effort to raise birth rates, the government encouraged its workers to go home early and procreate by turning off the lights in buildings. Something unknown to me was the outlawing of abortion except in special cases. According to this article, http://joongangdaily.joins.com, it's working. Abortions are down and women are being encouraged to give birth. The flip side, however, is that risky illegal abortions are on the rise, with back-alley abortions looming. But with the nation facing its potential implosion, it has to continue its efforts to encourage mothers to give birth. Many of the issues that America has faced stand in stark contrast to those in South Korea. Abortion has been a generally accepted procedure for some time. If you'll recall, I mentioned South Korea devalued its children. But the nation's problems lie much deeper. In an Asian nation, saving "face" is an important aspect of life, and being an unwed mother disgraces the family. Living as a single mother in South Korea is a difficult life. Most married mothers stay at home to take care of the children. Being a shunned single mother who has a hard time getting a job (because you are a single mother) and affording child care isn't like life in America. If we in the States suddenly faced a birthrate crisis, what would the U.S. government do?

    Read the article

  • Is my client correct that I cannot take a vacation as a subcontractor? [closed]

    - by Rae Ann
    I have two clients who I do ongoing work for as a subcontractor. Both are sporadic and part-time. Company A sent me to a 3-day certification course out of state. The following week I was scheduled for a 3-day vacation. I warned Company B three weeks prior to these events. During the training that Company A was paying for, Company B asked me to leave the training to work on something for them that needed immediate attention. I declined. However, I made arrangements to work on it in the evenings and early mornings after they threatened to take the work, and all subsequent work for this client, to someone else. I lost all the networking and fun from that trip... The following week I was in Florida and was again asked to do more work on the project after the feedback from the client. The integrated product personnel would not cooperate or return any of our calls, so I did the best I could. I turned in the work, explained the issue, and then was gone for 3 hours. When I returned, all my access to the project had been revoked, and after a week of my calls and emails I found out they had replaced me. I sent an invoice 3 weeks ago, and they tell me they owe me nothing because I did not do the whole project and they cannot bill the client for what I did, since they are billing for the second contractor who started over. I was told that they realize I was on vacation, but that as a subcontractor I lose the ability to just disappear. I was gone 3 hours! Is this normal, correct, legal?? Not only did they ruin my class and my vacation, but now they expect me not to demand payment? They ended our relationship, and I was in the middle of another project of theirs too. They told me to immediately cease all work for them. How do I get paid for the work I have yet to invoice this month?

    Read the article

  • C++ Succinctly now available!

    - by Michael B. McLaughlin
    Over the summer I worked with SyncFusion to create an eBook, based off of my C# to C++ guide, for their free Succinctly Series of eBooks. Today the result, C++ Succinctly, was published for download. It is free (registration required; they make tools and libraries for .NET development, so you might get an occasional email from them – I've been signed up for a few months and have had maybe 3 emails total, so it's not horrible super-spam or anything) and you can download it as a PDF or a Kindle .MOBI file (or both). I'm excited about how it turned out and enjoyed working with the people at SyncFusion. The book contains a total of 20 code samples, which you can download from BitBucket (there's a link very early in the book). Almost all of the code is also inline in the book itself, so that you don't need to worry about flipping back and forth between your dev machine and your eReader (but if you want to understand a concept better, you can easily download the code, open it up in VS 2012, and play around with it to see what happens when you tinker with things). The code does require Visual Studio 2012 because of its expanded support for C++11 features, and since I wrote all of the samples as console programs for clarity and compactness, you will need a version that supports C++ desktop development (currently VS 2012 Pro, Premium, or Ultimate). Sometime this fall, Microsoft will be releasing Visual Studio 2012 Express for Windows Desktop, which should provide a free way to use the samples. That said, I tested all of the samples with MinGW, and only the StorageDurationSample will not compile with it, due to the thread-local storage code. If you comment that out, then you can compile and run all the samples with MinGW (or using a recent version of GCC in a GNU/Linux environment, or any other C++ compiler that provides the same level of C++11 support that Visual Studio 2012 does). I hope it proves helpful to those of you who choose to check it out!

    Read the article

  • Randomly and uniquely iterating over a range

    - by Synetech
    Say you have a range of values (or anything else) and you want to iterate over the range and stop at some indeterminate point. Because the stopping value could be anywhere in the range, iterating sequentially is no good, because it causes the early values to be accessed more often than later values (which is bad for things that wear out), and also because it reduces performance, since it must traverse extra values. Randomly iterating is better because it will (on average) increase the hit rate, so that fewer values have to be accessed before finding the right one, and also distribute the accesses more evenly (again, on average). The problem is that the standard method of randomly jumping around will result in values being accessed multiple times, and has no automatic way of determining when each value has been checked and thus the whole range has been exhausted. One simplified and contrived solution could be to make a list of each value, pick one at random, then remove it. Each time through the loop, you pick one from the set of remaining items. Unfortunately, this only works for small lists. As a (forced) example, say you are creating a game where the program tries to guess what number you picked and shows how many guesses it took. The range is 0-255, and instead of asking "Is it 0? Is it 1? Is it 2?"…, you have it guess randomly. You could create a list of the 256 numbers, pick randomly and remove it. But what if the range was 0-2^32? You can't really create a 4-billion-item list. I've seen a couple of RNG implementations that are supposed to provide a uniform distribution, but none that are also supposed to be unique, i.e., no repeated values. So is there a practical way to randomly, and uniquely, iterate over a range?
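
    One practical construction, offered as an assumption rather than something from the question: a full-period linear congruential generator over a power-of-two range is a permutation of that range, so it visits every value exactly once before repeating, without storing a list. The constants below satisfy the Hull-Dobell conditions (c odd, a % 4 == 1) for modulus 2^32:

        // Visits every value in [0, 2^32) exactly once, in a scrambled order,
        // without building a 4-billion-item list. x' = (a*x + c) mod 2^32 has
        // full period when c is odd and a % 4 == 1 (Hull-Dobell theorem).
        public class FullCycleIterator {
            private static final long A = 1664525L;
            private static final long C = 1013904223L;
            private static final long MASK = (1L << 32) - 1;     // mod 2^32

            private final long start;
            private long current;
            private boolean wrapped = false;

            public FullCycleIterator(long seed) {
                this.start = seed & MASK;
                this.current = this.start;
            }

            // Returns the next value in [0, 2^32), or -1 once the range is exhausted.
            public long next() {
                if (wrapped) {
                    return -1;
                }
                long value = current;
                current = (A * current + C) & MASK;
                if (current == start) {
                    wrapped = true;                               // completed the full cycle
                }
                return value;
            }

            public static void main(String[] args) {
                FullCycleIterator it = new FullCycleIterator(12345L);
                for (int i = 0; i < 5; i++) {
                    System.out.println(it.next());
                }
            }
        }

    For a range that is not a power of two, a common trick is to run the generator over the next power of two and skip values that fall outside the range; for small ranges, shuffling an explicit list (as described above) is simpler. Note that the order, while scrambled, is deterministic rather than cryptographically random.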

    Read the article

  • Back in Town and Ready for New Beginnings

    - by MOSSLover
    Originally posted on: http://geekswithblogs.net/MOSSLover/archive/2013/11/03/back-in-town-and-ready-for-new-beginnings.aspxI just took a super long trip that lasted from September 27th until today.  I flew into St. Louis and then rented a car and drove over 12,000 miles.  I just dropped the rental off last night.  I went to a ton of states, did a lot of really cool things, saw a lot of really cool people, and bought a ton of beer.  I made some decisions, but this post isn't really about my decisions.  It's more about the question that everyone has been asking, "Where am I going to work?".So here's the answer...BlueMetal Architects as a Senior SharePoint Engineer.  Here is their website: http://www.bluemetal.com/.  I basically start tomorrow.  I didn't want to post anything super early, because I didn't want to jinx things.  I am really excited.  Now that I'm back I'm hoping that things will start to turn around for me.  I look forward to the future.

    Read the article

  • Research useful for getting a job?

    - by Twirling Hearth
    I have recently started a BS program in Computer Science, in order to improve my employment prospects. I already possess a Master's in sociology (as part of a PhD program that I left early because I could not possibly sustain interest any longer). As such, I am trying to find my way in the grand world of computers. One option that has been suggested to me in the past is something to do with social networking. I already have a strong social sciences background, and my knowledge of programming is increasing as I go through my studies. I know there are some people in my city (Boston) who are doing research in that area, so it's possible I could get someone to take interest in me. For that matter, because research is something that I'm pretty good at, it's an option I'm considering, career-wise. I just have one question, is it a worthwhile use of my time career-wise? I have no burning intellectual passion for that topic, but I'm perfectly happy to do it, if it means $$$. Your thoughts are welcome.

    Read the article

  • Advanced TSQL training

    - by Dave Ballantyne
    Over the past few years, I've had it on my to-do list to write and deliver a full-scale SQL Server training course, and not just an hour-long, bite-size session at user groups and conferences. To me, SQL Server development is not just knowing and remembering the syntax of commands. Sometimes I semi-jest that I have "never written a merge statement without looking up the syntax", but I know from my interactions on and off line that I am far from alone in this. In any case, we have an awesome tool in the internet, which is great at looking things up. When developing SQL Server based solutions, of more importance is knowing the internals of the engine. SQL Server is a complex piece of software, and we need to be able to understand, to a fairly low level (you can always dive deeper), the choices that it makes and why it makes them, in order to deliver performant, reliable, predictable and scalable systems to our customers and end users. This is the view I shall be taking over two days in March (19th and 20th) in London and, TBH, one I don't see taken often enough. Early-bird discounts are available until 31 Dec. Full details of the course, and a high-level view of the bullet points we shall be covering, are available at the Technitrain site (http://tinyurl.com/TSQLTraining).

    Read the article

  • Is there a product planning tool that has these specific features? [closed]

    - by acjohnson55
    I am working on a web startup in the early stages, and we are struggling a bit to manage the scope and scheduling of our product. We have loads of high-level features in the pipeline, but we need a good way of scheduling them for release iterations and breaking them into actual tasks that can be scheduled (that could be a separate tool, but integration would be preferred). I would say that our product can be pretty cleanly divided into "aspects", and we want to be able to separate features by the aspect to which they apply. Perhaps most importantly, it should be really simple to create and move features between target release points. We don't have physical space for a war room type setup, so whatever we settle upon should ideally have a cloud-type web interface. Right now, we're using Excel to make a grid of product aspects vs. target releases, and we store features at the intersections. But this is not providing a good way of indexing tasks to those features or being able to move them around. I would much rather have something that automates the grid overview. I'm less interested in something that helps with low-level scheduling than I am in something that is good at organizing the product plan at the long-term, high-level view. Is there a product planning tool out there that matches these specifications?

    Read the article

  • Procurement and E-Business Suite Product Analyzers .. Can you use this tool to resolve your SR?

    - by LindaJ-Oracle
    Procurement and E-Business Suite Product Analyzers (Doc ID 1545562.1). Analyzers are query/read-only tools with easy-to-read HTML output. The tools are delivered by EBS Support via My Oracle Support document IDs for ease of use. The Analyzer scripts are meant to be run as part of your production maintenance program by your sysadmin, or by designated end users. The result set is an easy-to-read HTML output that provides recommendations, solutions and early warnings about items that should be reviewed and corrected. Each analyzer can be run on demand or scheduled for repeatability and emailed to critical reviewers. There are several Analyzers available for E-Business Suite Applications Technology Group, Financials, and Manufacturing, including some of the following topics (review them all at Doc ID 1545562.1): Workflow Concurrent Processing Clone Log Parser Utility (Rapid Clone) Invoices, Payments, Accounting, Suppliers and EBTax Validate Data before Period Close EBTax Setup Payables Trial Balance Internet Expenses AutoInvoice Post-Process ASCP Performance PO Approval iProcurement Items. For the Procurement-specific Analyzers, access them directly at:
    R12 IP Item Analyzer Diagnostic Script (Doc ID 1586248.1)
    R12: PO Approval Analyzer Diagnostic Script (Doc ID 1525670.1)

    Read the article

  • Learning curve for web development

    - by refro
    At the moment our team has a huge challenge: we're being asked to deliver a new GUI for an embedded controller. The deadline is very tight and is set for April 2013. Our team is very diverse: some people are at the level of functional programming (mostly C), others (including myself) have mastered object-oriented programming (C++, C#). We built a prototype for Android; although it has its quirks, it is mostly just OO. For the future there is a wish to support multiple platforms (Windows, Android, iOS). In my opinion an HTML5 app with a native app shell is the way to go. When gathering more information on the frameworks to use, etc., it became obvious to me that a paradigm shift is needed. None of us has a web background, so we need to learn from the ground up. The shift from functional to OO took us about 6 months before we became productive (and some of the early subsystems were rewritten because they were a total mess). Can we expect the learning curve to be similar? Can this be pulled off with a web app? (My feeling says it will already be hard to pull off as a native app, which is at the edge of our comfort zone.)

    Read the article

  • Client/Server game even in solo: any big problem?

    - by Klaim
    I'm making a game which has a strong basic design based on multiplayer, but which should also provide a really interesting and self-sufficient solo game. A bit like a real-time strategy game. The events and actions taken shouldn't be as massive and immediate as in an FPS, so you can also think of the networking as being like an RTS's. It's a PC game, targeting Windows, Mac OS X and Linux (Ubuntu & Fedora). It's programmed in C++, using a variety of open source libraries, so I have great (potential) control over performance. So far I have always considered that just making the game work with two applications, client & server, even in solo mode, was OK. However, as I'm in the process of starting the network code, I'm having doubts about whether it's a good idea. I'm not a specialist, so I might be missing something in my analysis. I see these pros and cons:
    Pros:
    - The game works only one way, so if I fix a bug it should apply to all game modes, whatever the distance to the server is;
    - Basic networking issues would be detected early, including behaviour with protection software (firewalls) installed (I am not a specialist, so this might be wrong).
    Cons:
    - I suppose that even if it should be fast enough, networking client and server on the same computer would still be slower than no networking and message passing in (one) process's memory.
    - Maybe debugging would be more difficult? I don't have experience in this case, but so far I assume that Visual Studio allows me to debug multiple processes, so it shouldn't be really different. Also, remote debugging.
    My question is: is there a big disadvantage that I missed? Or maybe there are advantages that I missed and that should encourage me to just continue with only client-server game sessions?

    Read the article
