Search Results

Search found 8219 results on 329 pages for 'less'.


  • Impact on SEO of adding categories/tags in front of the HTML title [closed]

    - by Mad Scientist
    Possible Duplicate: Does the order of keywords matter in a page title? All StackExchange sites add the most-used tag of a question in front of the HTML title for SEO purposes. On Stack Overflow, for example, this is usually the programming language, so you end up with a title like "python - How do I do X?". This obviously has an enormous benefit for SEO, as the programming language is an extremely important keyword that is very often omitted from the title. Now, my question is about the cases where the tag isn't an important keyword missing from the title, but just a category. On Biology.SE, for example, one would have questions like "biochemistry - How does protein X interact with Y?", or on Skeptics "medical science - Do vaccines cause autism?". Those tags are usually not part of the search terms; they serve to categorize the content, but users don't use them in their searches. How harmful, in terms of SEO, is adding tags that are not used in searches? Is there any hard data on the impact this practice might have? The negative aspects I can imagine, but have no data to show are actually a problem, are:
    - I heard that search engines dislike keyword stuffing, and this might trigger some defense mechanisms against it.
    - It's a practice associated with less reputable sites; a keyword in front that doesn't fit the actual title well might look suspicious to some users.
    - It wastes precious space in the title shown in search results.

    Read the article

  • Oracle Tutor - Is Anyone Reading Your Documentation?

    - by mary.keane
    If you are responsible for documenting your business practices, wouldn't it be nice to know if anyone is using the documentation? If the employees find it useful? You might want to consider surveying the users of the documentation on a regular basis. There are a number of free survey tools online (search for "free survey tools"), and you can have a survey ready in a matter of minutes. It's as simple as gathering a list of questions and a list of email addresses. For the questions, here are some suggestions:
    - How often do you access the policy and procedure site?
    - How useful is the site?
    - How easy is it to navigate the site?
    - How often are your questions answered on the site?
    - What suggestions do you have to make the site more useful?
    You may want to consider just asking a few questions each month so that employees can complete the survey in less than 5 minutes (you'll get more responses). Make sure you have several comment boxes in the survey so that the employees can give suggestions. As the users of your documentation, the employees may have some terrific ideas that will enhance the usability of your policy and procedure site. It would be great to hear your suggestions for how to survey the users of your documentation. Mary R. Keane, Senior Development Manager, Oracle BPM and Tutor

    Read the article

  • Should we test all our methods?

    - by Zenzen
    So today I had a talk with my teammate about unit testing. The whole thing started when he asked me "hey, where are the tests for that class, I see only one?". The whole class was a manager (or a service, if you prefer to call it that) and almost all the methods were simply delegating stuff to a DAO, so each was similar to:

        SomeClass getSomething(parameters) {
            return myDao.findSomethingBySomething(parameters);
        }

    A kind of boilerplate with no logic (or at least I do not consider such simple delegation to be logic), but a useful boilerplate in most cases (layer separation etc.). And we had a rather lengthy discussion about whether or not I should unit test it (I think it is worth mentioning that I did fully unit test the DAO). His main arguments were that it was not TDD (obviously), that someone might want to see the test to check what this method does (I do not know how it could be more obvious), and that in the future someone might want to change the implementation and add new (or more like "any") logic to it (in which case I guess someone should simply test that logic). This made me think, though. Should we strive for the highest test coverage percentage? Or is it simply art for art's sake? I simply do not see any reason behind testing things like:
    - getters and setters (unless they actually have some logic in them)
    - "boilerplate" code
    Obviously a test for such a method (with mocks) would take me less than a minute, but I guess that is still time wasted, and a millisecond longer for every CI run. Are there any rational/not "flammable" reasons why one should test every single line of code (or as many as one can)?
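    For what it's worth, a test for such a delegating method ends up nearly restating the method itself. A minimal sketch in C# (assuming NUnit and a Moq-style mock; all type names are hypothetical stand-ins for the manager and DAO described above):

        using Moq;
        using NUnit.Framework;

        public class SomeClass { }

        public interface ISomeDao
        {
            SomeClass FindSomethingBySomething(string parameters);
        }

        public class SomeManager
        {
            private readonly ISomeDao myDao;
            public SomeManager(ISomeDao dao) { myDao = dao; }

            // The delegating "boilerplate" method in question.
            public SomeClass GetSomething(string parameters)
            {
                return myDao.FindSomethingBySomething(parameters);
            }
        }

        [TestFixture]
        public class SomeManagerTests
        {
            [Test]
            public void GetSomething_DelegatesToDao()
            {
                var expected = new SomeClass();
                var dao = new Mock<ISomeDao>();
                dao.Setup(d => d.FindSomethingBySomething("params")).Returns(expected);

                var result = new SomeManager(dao.Object).GetSomething("params");

                // The assertion mirrors the method line for line.
                Assert.AreSame(expected, result);
            }
        }

    Whether a test that mirrors its subject this closely earns its keep is exactly the judgment call under discussion.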

    Read the article

  • Microsoft releases Visual Studio 2010 SP1

    - by brian_ritchie
    Microsoft has been beta testing SP1 since December of last year. Today, it was released to MSDN subscribers and will be available for public download on March 10, 2011. The service pack includes a slew of fixes, and a number of new features:
    - Silverlight 4 support
    - Basic Unit Testing support for the .NET Framework 3.5
    - Performance Wizard for Silverlight
    - IntelliTrace for 64-bit and SharePoint
    - IIS Express support
    - SQL CE 4 support
    - Razor support
    - HTML5 and CSS3 support (IntelliSense and validation)
    - WCF RIA Services V1 SP1 included
    - Visual Basic Runtime embedding
    - ALM improvements
    Of all the improvements, IIS Express probably has the largest impact on web developer productivity. According to Scott Guthrie, it provides the following:
    - It's lightweight and easy to install (less than 10MB download and a super quick install)
    - It does not require an administrator account to run/debug applications from Visual Studio
    - It enables a full web-server feature set - including SSL, URL Rewrite, Media Support, and all other IIS 7.x modules
    - It supports and enables the same extensibility model and web.config file settings that IIS 7.x supports
    - It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all)
    - It works on Windows XP and higher operating systems - giving you a full IIS 7.x developer feature-set on all OS platforms
    IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk. It does not require any registration/configuration steps, which makes it really easy to launch and run for development scenarios. Good stuff indeed. This will make our lives much easier. Thanks Microsoft...we're feeling the love!

    Read the article

  • What is the value of checking in broken unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in broken unit tests? I will use a simple example: case sensitivity. The current code is case sensitive: a valid input into the method is "Cat", and it returns an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive, so if the method were passed "cat" it could possibly return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would make this example work, a more complex issue may take weeks to fix, while identifying the bug with a unit test is a much less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some flaws just need explicit implementation documentation (e.g. case sensitive or not); some are in code that never triggers the bug given how it is currently called. But unit tests can be created with valid inputs that execute the specific scenarios in which the bug appears. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category etc., to determine whether a build was successful based on which tests were executed? Eventually the unit test should pass, once someone fixes the code. On one hand, keeping the failing tests shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding out the ones that are expected to fail vs. failures caused by a code check-in would be difficult.
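    One concrete way to flag such a test, sketched with NUnit-style attributes (the enum, parser, bug number and category name are all hypothetical):

        using NUnit.Framework;

        public enum Animal { Null, Cat }

        public static class AnimalParser
        {
            // Current, case-sensitive behaviour as described above.
            public static Animal Parse(string s)
            {
                return s == "Cat" ? Animal.Cat : Animal.Null;
            }
        }

        [TestFixture]
        public class AnimalParserTests
        {
            // Documents the desired behaviour but stays out of the pass/fail gate
            // until the fix lands; the category lets a build script select or
            // exclude known-bug tests as a group.
            [Test, Category("KnownBug"), Ignore("Bug #1234: parsing should be case-insensitive")]
            public void Parse_IsCaseInsensitive()
            {
                Assert.AreEqual(Animal.Cat, AnimalParser.Parse("cat"));
            }
        }

    Removing the Ignore attribute (or running the KnownBug category explicitly) turns the checked-in test back into a live statement of the open bug.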

    Read the article

  • Disallow robots.txt from being accessed in a browser but still accessible by spiders?

    - by Michael Irigoyen
    We make use of the robots.txt file to prevent Google (and other search spiders) from crawling certain pages/directories in our domain. Some of these directories/files are secret, meaning they aren't linked (except perhaps on other pages encompassed by the robots.txt file). Some of these directories/files aren't secret; we just don't want them indexed. If somebody browses directly to www.mydomain.com/robots.txt, they can see the contents of the robots.txt file. From a security standpoint, this is not something we want publicly available to anybody. Any directories that contain secure information are set behind authentication, but we still don't want them to be discoverable unless the user specifically knows about them. Is there a way to provide a robots.txt file but to have its presence masked from John Doe accessing it from his browser? Perhaps by using PHP to generate the document based on certain criteria? Perhaps something I'm not thinking of? We'd prefer a way to do it centrally (meaning a <meta> tag solution is less than ideal).
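    As a sketch of the "generate it dynamically" idea, here is one way it could look as an ASP.NET MVC action (shown in C# rather than PHP; the crawler list and disallowed paths are made up). The obvious caveat: the User-Agent header is trivially spoofed, so this only hides the file from casual browsing; it is not a security control.

        using System;
        using System.Linq;
        using System.Web.Mvc;

        public class RobotsController : Controller
        {
            private static readonly string[] KnownCrawlers = { "Googlebot", "Bingbot", "Slurp" };

            // Route this action to the /robots.txt URL (route registration not shown).
            public ActionResult RobotsTxt()
            {
                string ua = Request.UserAgent ?? string.Empty;
                bool looksLikeCrawler = KnownCrawlers.Any(
                    c => ua.IndexOf(c, StringComparison.OrdinalIgnoreCase) >= 0);

                // Crawlers get the real rules; everyone else gets a harmless default.
                string body = looksLikeCrawler
                    ? "User-agent: *\nDisallow: /private-reports/\nDisallow: /drafts/"
                    : "User-agent: *\nDisallow:";

                return Content(body, "text/plain");
            }
        }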

    Read the article

  • GeForce and Radeon: what is the present condition of open-source and proprietary drivers?

    - by Septagram
    So, I'm about to buy a fresh video card. Since I do most of my stuff on Linux, I wonder how well either video card will perform. I recently had a good experience with a GeForce 6600 with proprietary drivers, and a less than satisfactory experience with a Radeon 9000 a while ago. From my experience, proprietary drivers for GeForce used to work very well, while proprietary drivers for Radeon failed miserably. And open-source drivers were sloooow. A few months ago I found out that ATI opened their specifications, and work on a fully featured open-source driver is in progress. I prefer to use free software whenever possible, with the exception of games, so if that driver is fast enough, feature-rich enough and reliable enough, I'd very much like to try it out. If I can just do basic things, like watch video, heavily use compiz and work with simple applications, this may be enough; I do most of my gaming under Windows anyway. However, there is a good chance I'll go into indie game development in a few months full-time, so it should also be able to run not-so-very-demanding games (say Nexuiz). But if it isn't ready, I'd like to know what to expect from proprietary drivers. Do recent proprietary drivers from NVIDIA and ATI work well? Are ATI drivers just as easy to install on Ubuntu as NVIDIA drivers?

    Read the article

  • Hash Algorithm Randomness Visualization

    - by clstroud
    I'm curious if anyone here has any idea how the images were generated as shown in this response: Which hashing algorithm is best for uniqueness and speed? Ian posted a very well-received response, but I can't seem to understand how he went about making the images. I hate to make a new question dedicated to this, but I can't find any means to ask him more directly. On the other hand, perhaps someone has an alternative perspective. The best I can personally come up with would be something almost like a bar graph, which would illustrate how evenly the buckets of the hash table are being filled. I have a working Cocoa program that does this, but it can't generate anything like what he showed there. So the question is twofold, I suppose: A) How does one truly interpret the data he shows? Is it more than "less whitespace = better"? B) How does one generate such an image based on some set of inputs, a hash, and an index? Perhaps I'm misunderstanding entirely, but I really would like to know more about this particular visualization technique. Or maybe I'm mis-applying this to hash tables rather than just hashes in general, but in that case I don't know how it would be "bounded" for the image.
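    For part B, a minimal sketch of one way to draw such a picture (in C# with System.Drawing rather than Cocoa; the input set, grid size and coordinate mapping are arbitrary assumptions): hash a large set of keys, map each hash onto a fixed grid, and mark the cells that get hit. An even gray field suggests an even spread; streaks or clumps suggest bias.

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;

        class HashScatter
        {
            static void Main()
            {
                const int size = 512;
                using (var bmp = new Bitmap(size, size))
                {
                    using (var g = Graphics.FromImage(bmp))
                        g.Clear(Color.White);

                    for (int i = 0; i < 200000; i++)
                    {
                        // Substitute whatever hash function is under test here.
                        uint h = (uint)("key" + i).GetHashCode();

                        // Interpret the hash as an (x, y) cell in the grid.
                        int x = (int)(h % size);
                        int y = (int)((h / size) % size);
                        bmp.SetPixel(x, y, Color.Black);
                    }

                    bmp.Save("hash-scatter.png", ImageFormat.Png);
                }
            }
        }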

    Read the article

  • Lost WiFi after 12.10 upgrade

    - by Steven Guillory
    I received my new Dell Vostro 2420 last week, and just got around to upgrading from 11.10 to 12.10. Unfortunately, like many others (after researching the issue), I no longer have WiFi. I have tried every sudo command given that worked for others, and still can't get my wireless to function. I am new to Linux, so any and all help is appreciated. Thanks in advance!
    Edit: I can connect via ethernet, just not via wifi. As a matter of fact, when I use Fn + F2 to turn on wifi, only my bluetooth comes on. Output of lspci:
    00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
    00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
    00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
    00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
    00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
    00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
    00:1c.3 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 4 (rev c4)
    00:1c.5 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 6 (rev c4)
    00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
    00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
    00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
    00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
    07:00.0 Network controller: Broadcom Corporation Device 4365 (rev 01)
    09:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 07)
    This is what I am getting:
    dpkg: error: --install needs at least one package archive file argument
    Type dpkg --help for help about installing and deinstalling packages [*];
    Use dselect or aptitude for user-friendly package management;
    Type dpkg -Dhelp for a list of dpkg debug flag values;
    Type dpkg --force-help for a list of forcing options;
    Type dpkg-deb --help for help about manipulating *.deb files;
    Options marked [*] produce a lot of output - pipe it through less or more !

    Read the article

  • Web migration of a VB6 system with VWG

    - by Webgui
    The Brinks Bolivia eSAC (Customer Service) system allows users to register all different kinds of contacts for a customer, in addition to maintaining an updated status of each service or customer request, so the company has accurate information and can perform the appropriate procedures for all applications. The system was originally developed in VB6, and since web access was essential it was offered via Citrix. Since the application's performance was a critical issue, as was the need to offer the system without client-specific installations, the company looked for a solution that would address those drawbacks of using Citrix. The search for a solution that would allow it to offer the eSAC system over the web without client installations, and provide sufficient performance even on limited bandwidth, led Brinks to the decision to migrate their VB6 Customer Service system to Visual WebGui. "Developing on Visual WebGui we were able to migrate the system to a web environment and even add new features in less time, which allows us to offer it over a standard web browser with better performance and no installations, as was required with Citrix," concluded Alexander Cuellar. The full article and screenshots of the system are available here.

    Read the article

  • How to handle fine grained field-based ACL permissions in a RESTful service?

    - by Jason McClellan
    I've been trying to design a RESTful API and have had most of my questions answered, but there is one aspect of permissions that I'm struggling with. Different roles may have different permissions and different representations of a resource. For example, an Admin or the user himself may see more fields in his own User representation vs another, less-privileged user. This is achieved simply by changing the representation on the backend, i.e. deciding whether or not to include those fields. Additionally, some actions may be taken on a resource by some users and not by others. This is achieved by deciding whether or not to include those action items as links, e.g. edit and delete links. A user who does not have edit permissions will not have an edit link. That covers nearly all of my permission use cases, but there is one that I've not quite figured out. There are some scenarios whereby, for a given representation of an object, all fields are visible to two or more roles, but only a subset of those roles may edit certain fields. An example:

        {
            "person": {
                "id": 1,
                "name": "Bob",
                "age": 25,
                "occupation": "software developer",
                "phone": "555-555-5555",
                "description": "Could use some sunlight.."
            }
        }

    Given 3 users: an Admin, a regular User, and Bob himself (also a regular User), I need to be able to convey to the front end that: Admins may edit all fields, Bob himself may edit all fields, but a regular User, while they can view all fields, can only edit the description field. I certainly don't want the client to have to make the determination (or even, for that matter, to have any notion of the roles involved), but I do need a way for the backend to convey to the client which fields are editable. I can't simply use a combination of representation (the fields returned for viewing) and links (whether or not an edit link is available) in this scenario, since it's more finely grained. Has anyone solved this elegantly without adding the logic directly to the client?
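    One shape an answer could take (a sketch only; the _editable key is a made-up convention, not a standard): have the backend attach per-request metadata naming the writable fields, so the representation varies by role while the client stays role-agnostic. For the regular User above, the response might be:

        {
            "person": {
                "id": 1,
                "name": "Bob",
                "age": 25,
                "occupation": "software developer",
                "phone": "555-555-5555",
                "description": "Could use some sunlight.."
            },
            "_editable": ["description"]
        }

    For Bob or an Admin the same endpoint would list every field name in _editable, and the client simply renders inputs for whatever appears there.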

    Read the article

  • How is the impact of a requirement change on existing code determined?

    - by MainMa
    Hi, how do companies working on large projects evaluate the impact of a single modification on the existing code? Since my question is probably not very clear, here's an example. Let's take a sample business application which deals with tasks. In the database, each task has a state, 0 being "Pending", ... 5 - "Finished". A new requirement adds a new state, between the 2nd and 3rd one. It means that:
    - a constraint on the values 1 - 5 in the database must be changed,
    - the business layer and code contracts must be changed to add the new state,
    - the data access layer must be changed to take into account that, for example, the state StateReady is now 6 instead of 5, etc.,
    - the application must implement the new state visually, add new controls for it, new localized strings for tool-tips, etc.
    When an application was written recently by one developer, it's more or less easy to predict every change to be made. On the other hand, when an application was written over years by many people, no single person can anticipate every change immediately, without any investigation. So since this situation (such changes in requirements) is very frequent, I imagine there are already some clever techniques and ways to predict the impact. Are there any? Do you know any books which deal with this subject? Note: my question is not related to the "How do you deal with changing requirements?" question. In fact, I'm not interested in evaluating the cost of a change, but rather in the way to predict the parts of an application which will be affected by the change. What those changes will be and how difficult they really are doesn't matter for my question.
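    To make the third point concrete, here is a sketch of why inserting a state renumbers everything after it (a C#-style enum; all state names except Pending and Finished are invented for the example):

        // Before: Pending = 0 ... Finished = 5, with states persisted by value.
        enum TaskState
        {
            Pending = 0,
            Queued = 1,
            Assigned = 2,
            Validated = 3,   // the newly inserted state
            InProgress = 4,  // was 3
            Reviewed = 5,    // was 4
            Finished = 6     // was 5: every stored value >= 3 must be migrated,
                             // which is why the DB constraint and the DAO both change
        }

    Any code or data that relied on the numeric values rather than the names is part of the impact surface, which is exactly what makes the prediction hard in an old codebase.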

    Read the article

  • Deciding on a company-wide javascript strategy [on hold]

    - by drogon
    Our company is moving most of its software from thick-client WinForms apps to web apps. We are using ASP.NET MVC on the server side. Most of the developers are brand new to the web and need to become efficient and knowledgeable at writing client-side web code (JavaScript). We are deciding on a number of things and would appreciate feedback on the following:
    1. Angular.js or Backbone.js? Backbone (w/ Underscore) is certainly more lightweight, but requires more custom development. Angular seems to be a full-fledged framework, but would require everyone to embrace it and probably has a longer learning curve(??). (Note: I know nothing about Angular at this point.)
    2. Require.js or script includes w/ MVC bundleconfig? Require.js makes development "feel like" C# (importing namespaces), but integrating the build/minification process can be a pain (especially the configuration). Bundling via MVC requires developers to worry more about which scripts to include, but has less overall development friction.
    3. TypeScript vs JavaScript? Regardless of frameworks, our developers are going to need to learn the basics. TypeScript is more like C# and MAY be easier for C# developers to understand. However, learning TypeScript before JavaScript may hinder their mastery of JavaScript at the expense of efficiency.

    Read the article

  • How do I parse a header with two different versions [ID3] while avoiding code duplication?

    - by user66141
    I really hope you can give me some interesting viewpoints for my situation, because I am not satisfied with my current approach. I am writing an MP3 parser, starting with an ID3v2 parser. Right now I'm working on the extended header parsing; my issue is that the optional header is defined differently in versions 2.3 and 2.4 of the tag. The 2.3 version optional header is defined as follows:

        struct ID3_3_EXTENDED_HEADER {
            DWORD dwExtHeaderSize;  // Extended header size (either 6 or 8 bytes, excluded)
            WORD  wExtFlags;        // Extended header flags
            DWORD dwSizeOfPadding;  // Size of padding (size of the tag excluding the frames and headers)
        };

    While the 2.4 version is defined as:

        struct ID3_4_EXTENDED_HEADER {
            DWORD dwExtHeaderSize;    // Extended header size (synchsafe int)
            BYTE  bNumberOfFlagBytes; // Number of flag bytes
            BYTE  bFlags;             // Flags
        };

    How could I parse the header while minimizing code duplication? Using two different functions to parse each version sounds less than great, and using a single function with a different flow for each occasion is similar. Any good practices for this kind of issue? Any tips for avoiding code duplication? Any help would be appreciated.
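    One way to cut the duplication, sketched in C# for brevity though the shape carries straight over to C++: read the shared leading size field once, then hand the version-specific tail to a small per-version type. Every name here is illustrative, and byte-order/synchsafe decoding is elided.

        using System.IO;

        interface IExtendedHeader
        {
            void ParseTail(BinaryReader r);
        }

        class ExtendedHeaderV3 : IExtendedHeader
        {
            public ushort Flags;
            public uint PaddingSize;

            public void ParseTail(BinaryReader r)
            {
                Flags = r.ReadUInt16();       // extended header flags
                PaddingSize = r.ReadUInt32(); // size of padding
            }
        }

        class ExtendedHeaderV4 : IExtendedHeader
        {
            public byte FlagByteCount;
            public byte Flags;

            public void ParseTail(BinaryReader r)
            {
                FlagByteCount = r.ReadByte(); // number of flag bytes
                Flags = r.ReadByte();         // flags
            }
        }

        static class ExtendedHeaderParser
        {
            public static IExtendedHeader Parse(BinaryReader r, int majorVersion)
            {
                // Both layouts start with a 4-byte size; only the tail differs.
                // (ID3 is big-endian, and in 2.4 the size is a synchsafe integer;
                // both conversions are omitted, and 'size' is unused in this sketch.)
                uint size = r.ReadUInt32();

                IExtendedHeader header = majorVersion >= 4
                    ? (IExtendedHeader)new ExtendedHeaderV4()
                    : new ExtendedHeaderV3();
                header.ParseTail(r);
                return header;
            }
        }

    The shared prefix is parsed exactly once, and a hypothetical 2.5 layout would mean adding one more small class rather than a third near-copy of the whole routine.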

    Read the article

  • Confusion with floats converted into ints during collision detection

    - by TheBroodian
    So in designing a 2D platformer, I decided that I should be using a Vector2 to track the world location of my world objects, to retain some sub-pixel precision for slow-moving objects and other such subtle nuances, yet representing their bodies with Rectangles, because as far as collision detection and resolution are concerned, I don't need sub-pixel precision. I thought that the following line of thought would work smoothly...

        Vector2 wrldLocation;
        Point WorldLocation;
        Rectangle collisionRectangle;

        public void Update(GameTime gameTime)
        {
            Vector2 moveAmount = velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;
            wrldLocation += moveAmount;
            WorldLocation = new Point((int)wrldLocation.X, (int)wrldLocation.Y);
            collisionRectangle = new Rectangle(WorldLocation.X, WorldLocation.Y, genericWidth, genericHeight);
        }

    and I guess in theory it sort of works, until I try to use it in conjunction with my collision detection, which works by using Rectangle.Offset() to project where collisionRectangle would supposedly end up after applying moveAmount to it; if a collision is found, it finds the intersection and subtracts the difference between the two intersecting sides from the given moveAmount, which would theoretically give a corrected moveAmount to apply to the object's world location that would prevent it from passing through walls and such. The issue here is that Rectangle.Offset() only accepts ints, so I'm not really receiving an accurate adjustment to moveAmount for a Vector2. If I leave wrldLocation out of my previous example and just use WorldLocation to keep track of my object's location, everything works smoothly; but then obviously if my object is given velocities of less than 1 pixel per update, the velocity value may as well be 0, which I feel I may regret further down the line. Does anybody have any suggestions about how I might go about resolving this?
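    One way out, as a minimal sketch (assuming XNA-style Vector2/Rectangle): keep the collision box in float space as well, so the offset projection accepts the raw Vector2 move amount, and round to ints only at the edges (rendering, tile lookups). The struct below is hypothetical, not part of XNA.

        using Microsoft.Xna.Framework;

        struct FloatRect
        {
            public float X, Y, Width, Height;

            public FloatRect(float x, float y, float w, float h)
            {
                X = x; Y = y; Width = w; Height = h;
            }

            // Float-space equivalent of Rectangle.Offset(), so a sub-pixel
            // moveAmount is never truncated to zero before the collision test.
            public FloatRect Offset(Vector2 amount)
            {
                return new FloatRect(X + amount.X, Y + amount.Y, Width, Height);
            }

            public bool Intersects(FloatRect other)
            {
                return X < other.X + other.Width && other.X < X + Width
                    && Y < other.Y + other.Height && other.Y < Y + Height;
            }

            // Round only when an int rectangle is actually needed.
            public Rectangle ToRectangle()
            {
                return new Rectangle((int)X, (int)Y, (int)Width, (int)Height);
            }
        }

    wrldLocation stays the single source of truth, the corrected moveAmount is computed against the float rectangle, and the int rectangle becomes a derived value instead of the thing the physics runs on.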

    Read the article

  • Offshoring: does it ever work?

    - by DanSingerman
    I know there has been a fair amount of discussion on here about outsourcing/offshoring, and the general opinion seems to be that at best it is difficult, and at worst it fails. I have direct experience of offshoring myself; a previous company where I was a dev manager wanted to send some development offshore, and we ran a pilot scheme to see how well it would work. Of course it was a complete failure, although it is not completely clear to me whether this was down to the offshore devs being less talented, the process, or other factors (no doubt it was really a combination). I can see how, as a business, offshoring looks attractive (much lower day rate), but as far as I can see, the only way it could possibly work is if you do exceptionally detailed design up front, with incredibly detailed specifications; and by the time you have invested in producing that, you have probably spent nearly as much as if you had written the actual code locally (which I think is an instance of No Silver Bullet). So, what I want to know is: does anyone here have any experience of offshoring actually working, ever? Especially if there are any success stories of it working in a semi-agile way? I know there are developers here from all over the world; has anyone worked on an offshore project they consider successful?

    Read the article

  • Leaving the field of programming. What are the options?

    - by hal10001
    A lot of graduates ask about getting into this field, but I know there are times when I (as well as many others) think about leaving, too. My issue is that I love solving problems and the act of creating something that people enjoy using, and that is what keeps bringing me back. Lately, though, programming has become less about the act of creation and solving problems, and more about being "a monkey at a keyboard". Can you offer any advice with regard to:
    - What fields would offer equivalent problem-solving challenges consistently?
    - How would you go about doing the research, or considering the career change?
    - Basically anything else you think would be helpful in this situation.
    EDIT: I guess I should clarify and say that I've been in the field about 10 years, and I have had my fair share of working environments. At the place where I am now, and even at the previous two jobs, the people I worked with have been great. I've been very lucky in that respect. I'm beginning to wonder if the next step for me has little to do with actual programming and more to do with business analysis or strategic consulting. I would hate to get too far onto the business side of things, though, as I like being around tech folks more.

    Read the article

  • How can I get cross-browser consistent behavior for TR heights within a table with a set height? [migrated]

    - by Dan
    I have an arbitrary number of tables with an arbitrary number of rows in each, and all tables are the same height. My initial approach was to just set the overall height of the table and hope the rows were smart enough to distribute themselves appropriately. That's not the case. I have 4 different behaviors going on with 4 browsers, but I need them all to render at the very least in a similar way:
    - Safari & Chrome (WebKit): all rows are equal height, creating scroll bars as needed and fitting within the table height.
    - Firefox: all rows are the height necessary to fit their content, with the remaining rows overflowing out of the table. Additionally, if the content of the rows does not take up all of the height, only the part of the table with content in it takes the background (though it seems, through use of Firebug, that the actual table [and TR] extend to the bottom of the proper table height).
    - IE: all rows are the height necessary to fit their content, with the remaining rows overflowing out of the table.
    Obviously this only includes one version of each browser, and additional variation would likely appear with more being tested. Ideally, I want the browser to render TRs with less content smaller than those with more content, while still scrolling within the variable-height TRs when the overall height of the table is not enough. I could potentially see a solution achieving that with JS, but can it be done with CSS? Or, if not, can the behavior that WebKit displays be made to work across the browsers? Thanks! PS: Example can be found here.

    Read the article

  • How to start and maintain an after-work project

    - by Sam
    I work as a full-time developer. My workplace, however, is very limiting in the technologies and programming languages I can use. All of the work is done in C++. It is clear that C++ is rapidly losing (or has maybe already lost) its leading position. (Please don't flame me, I have years and years of C++ experience, and I love this language; I am merely stating a fact.) I have a few ideas for Java/Android projects as well as a project I would like to implement in C#. I see this as a way for me to stay current with the job market's trends, and I hope that it will help me find my next job in a more up-to-date area. So here's the problem: my normal workday is 10-11 hours; after finishing with the kids and house chores I get about 1-2.5 hours before I am too tired to think, much less code. At that point I am going to bed frustrated, disappointed with myself for not being able to stick with my plans, and then I wake up the next morning to do it all again. I have a few more hours during the weekends, but clearly I would need to do something different if I want to reach any of my goals. Is there any way for me to make better use of the time I have? Did any of you have a similar problem, and did you successfully resolve it?

    Read the article

  • Object construction design

    - by James
    I recently started to use C# to interface with a database, and there was one part of the process that appeared odd to me. When creating a SqlCommand, the method I was led to took the form:

        SqlCommand myCommand = new SqlCommand("Command String", myConnection);

    Coming from a Java background, I was expecting something more similar to:

        SqlCommand myCommand = myConnection.createCommand("Command String");

    I am asking, in terms of design, what is the difference between the two? The phrase "single responsibility" has been used to suggest that a connection should not be responsible for creating SqlCommands, but I would also say that, in my mind, the difference between the two is partly a mental one: the difference between a connection executing a command and a command acting on a connection, the latter of which seems less like what I have been led to believe OOP should be. There is also a part of me wondering if the two should be completely separate, and should only come together in some sort of connection.execute(command) method. Can anyone help clear up these differences? Are any of these methods "more correct" than the others from an OO point of view? (P.S. The fact that C# is used is completely irrelevant. It just highlighted to me that different approaches were used.)
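    Worth noting: ADO.NET does in fact offer both shapes. IDbConnection defines a CreateCommand() factory method (it just takes no command-text argument), so both styles below are real API, not hypothetical:

        using System.Data.SqlClient;

        class Demo
        {
            static void Run(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();

                    // Constructor style: the command is built against the connection.
                    var cmd1 = new SqlCommand("SELECT 1", conn);

                    // Factory style: the connection hands out a command already
                    // bound to itself; the text is set separately.
                    SqlCommand cmd2 = conn.CreateCommand();
                    cmd2.CommandText = "SELECT 1";
                }
            }
        }

    So the framework itself hedges on the design question: the factory exists for provider-agnostic code (IDbConnection), while the constructor is the convenient concrete form.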

    Read the article

  • How do you price your work?

    - by Dr.Kameleon
    Well, let me explain: this has really been an issue for me for such a long time. And what is worse, since coding is something I simply ADORE (I would definitely do it even if there was no payment involved whatsoever), is that I always end up feeling somewhat awkward... Anyway... So, here's the deal: you start working on a project, you may have something in your mind, and even if you're lucky enough that the client needs no "cost estimates" beforehand, sooner or later you'll face the ultimate dilemma of pricing your own work. So, how do YOU do it?
    - By estimating the time you put into it? (Obviously, this is not exact, 'coz perhaps a more capable coder will need much less time for the very same thing than a not-so-competent coder, and even the very same coder may not "perform" equally at all times.)
    - By the lines of code you've written? (Obviously, this is not a measure either: a 10-line script that does exactly the same as a 1000-line script is, at least for me, "better".)
    - By taking into account the level of complexity of the project and, perhaps, how specialised the subject is?
    - By taking into account other factors? (e.g. the value of the project for your customer)

    Read the article

  • screen does not wake up after suspend/ brightness will not adjust

    - by Nathan
    My computer is one of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16834230467 (Asus Zenbook UX32A-DB51). Just bought it yesterday. It is set to "suspend" when I close the lid, but when I open it up again I just get a black screen. Pressing the power button doesn't work, nor does clicking the mouse or any key combination. The same is true if I leave the computer until it suspends: black screen, no response, have to reboot. I also can't seem to adjust the brightness. I set up "brightness up" and "brightness down" function keys, and when I push them the brightness level shows as going up and down on the meter, but the screen does not actually become more or less bright. I am totally new to all of this. Please tell me what additional information you need to help me and where I can find it. Thanks!
    Edit: Tried the following, to no avail:
    1. sudo gedit /etc/default/grub
    2. Change the line GRUB_CMDLINE_LINUX="" into GRUB_CMDLINE_LINUX="acpi_osi=Linux"
    3. sudo update-grub
    4. Restart your Linux
    Also tried "acpi_backlight=vendor"; neither worked.

    Read the article

  • Is the Alternate Ubuntu installer still required for LVM or Software RAID setup?

    - by jimp
    Over the past 5 years, I have been setting up Ubuntu servers using the Alternate installer. I need to provision a new server today, and I'm curious whether the Alternate CD is still the only way to set up LVM/RAID at installation time. In my limited experience with Red Hat Enterprise Linux, I noticed its single installer configures LVM automatically. Has Ubuntu's installer, at least the standard "Server" installer, added support for LVM/RAID, or is the Alternate installer still required for that kind of server setup? http://mirror.anl.gov/pub/ubuntu-iso/DVDs/ubuntu/12.04.1/release/ Alternate install CD: "The alternate install CD allows you to perform certain specialist installations of Ubuntu. It provides for the following situations:
    - setting up automated deployments;
    - upgrading from older installations without network access;
    - LVM and/or RAID partitioning;
    - installs on systems with less than about 384MiB of RAM (although note that low-memory systems may not be able to run a full desktop environment reasonably)."
    LVM has always been fundamental for our server needs, so I'm surprised if it is still not considered a server-worthy feature.

    Read the article

  • Student Life in Nigeria

    - by FelixWehmeyer
    They say that the Nigerian way of life involves being able and ready to succeed in life no matter the obstacles. My name is Olawale Obasola; I graduated from Covenant University in '07 and am presently working as a graduate consultant in Oracle. I will give you an insight into student life in Nigeria. Nowadays, being a graduate in Nigeria is not the easiest thing; life after graduation is much harder than one would normally imagine. I guess it isn't helped by the economic uproar of the last few years, but it's as grueling as it can get. Also, the upper vs lower class phenomenon makes the journey a little too hurried, as it is easier to lose one's way. Joining Oracle is a dream come true, as it furthers my goal of taking technology to the core of Nigeria. My initial perception of joining Oracle was that I was going to be monitored every single second and my life would practically revolve around work, but ever since I stepped in here, it has been an amazing experience. As much as the work can be stressful at times, it's also the best place I have ever worked in, and the atmosphere is great. My team is the best Pre-Sales team in the region, and I gain invaluable knowledge every minute; we have a bond and understanding that helps propel each other to achieve more. The life of a Nigerian graduate isn't a bed of roses, but I have a strong will and personality to emerge from any hardship and succeed. We show the true strength and spirit of Africa; we never give up and would never settle for anything less than the best.

    Read the article

  • Where is Oracle Utilities Application Framework V3?

    - by Anthony Shorten
    You may have noticed that the latest version of the Oracle Utilities Application Framework is V4.0.1, while the last release of the Framework was V2.2. So what happened to V3? The short answer is that there is no V3 of the framework. The long answer is that the Oracle Utilities Application Framework has long been associated with Oracle Utilities Customer Care And Billing and Oracle Enterprise Taxation Management only. As more and more of the Oracle Tax And Utilities products are migrated onto the framework, the association between the framework and the original products is less appropriate. Therefore it was decided to pick a version number that emphasizes the decoupling of the framework's releases from any particular product. To illustrate this, the Oracle Mobile Workforce Management (MWM) V2.0.0 product uses Oracle Utilities Application Framework V4.0.1. If we used the old numbering scheme then MWM would be V4.0.1, which makes no sense given that the last release of MWM was V1.x. The framework has its own development team and product management. It basically has its own schedule (though it is still influenced by the products that use it, which makes sense). So that's the reasoning behind the version numbering change for the framework.

    Read the article
