Search Results

Search found 11597 results on 464 pages for 'complete graph'.


  • Inheriting projects - General Rules? [closed]

    - by pspahn
    Possible Duplicate: When is a BIG Rewrite the answer? Software rewriting alternatives Are there any actual case studies on rewrites of software success/failure rates? When should you rewrite? We're not a software company. Is a complete re-write still a bad idea? Have you ever been involved in a BIG Rewrite? This is an area of discussion I have long been curious about, but overall, I generally lack the experience to give myself an answer that I would fully trust. We've all been there: a new client shows up with a half-complete project they are looking to finish and launch. For whatever reason, they fired their previous developer, and it's now up to you to save the day. I am just finishing up a code review for a new client, and in my estimation it would be better to scrap what the previous developers built and start from scratch. There are a ton of reasons why I am leaning this way, but it still makes me nervous, since the client isn't going to want to hear "those last guys built you a big turd, and I can either polish it, or throw it in the trash". What are your general rules for accepting these projects? How do you determine whether it will be better to start from scratch or continue with the existing code base? What other extra steps might you take to help control client expectations, since the previous developer may have inflated those expectations beyond a reasonable level? Any other general advice?

    Read the article

  • SQL – Crossword Puzzle Based on Course Building Successful High Traffic Profitable Blog

    - by Pinal Dave
    Do you like crossword puzzles? I personally love them. Every time I open the newspaper, I try to solve at least one crossword or sudoku. It is just fun to tease the brain a little and stretch its limits. Regular readers of the blog are aware that I have recently published two courses on how to build a successful, high traffic, profitable blog. Here are the links to watch both the courses: Course 1, Course 2. Do watch them in order, as both courses have unique content which can help you build a better blog. On my birthday, July 30th, an interesting blog post was published on the Pluralsight blog. It was a crossword built from my two courses. I encourage you to try to solve the crossword I have built. Giveaway: There is a cool gift for the winner - it is a melting clock. Do not mistake it for a dummy or non-working clock. It looks like it is melting, but it always shows accurate time, and it is perfectly balanced to hang off of any flat surface. How to participate: Well, it is very simple; you just have to complete the crossword and send it to me at pinal at sqlauthority.com with all valid answers. The deadline: you must send it before Monday, August 5, 2013, or before the valid answer keys are posted on the Pluralsight blog. Hints: Though the crossword is very easy and intuitive, if you ever get stuck anywhere, here are two hints: Hint 1, Hint 2. Log in to the Pluralsight courses and watch both of them. Watching the courses will not only help you easily complete the crossword, but there are also hidden gems and secrets to building a high traffic, profitable blog. Here is the link to download the crossword: Download Crossword. Alternatively, you can download the image displayed below and print it as well. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Blogging

    Read the article

  • Design Application to "Actively" Invite Users (pretend they have privileges)

    - by user3086451
    I am designing an application where users message one another privately, and may send messages to any Entity in the database (an Entity may not have a user account yet; it is a professional database). I am not sure how best to design the database and the API to allow messaging unregistered users. The application should remain secure, with data only accessed by those with the correct permissions. Messages sent to persons without user accounts serve as an invitation. The invited person should be able to view the message, act on it, and complete the user registration upon receiving an InviteMessage. In simple terms, I have:
User: misc user fields (email, pw, dateJoined)
Entity (large professional dataset): personalDetails..., user->User (may be null)
UserMessage: sender->User, recipient->User, dateCreated, messageContent, other fields...
InviteMessage: sender->User, recipient->Entity, expiringUrl, inviteeEmail, inviteePhone
I plan to alert the user when selecting a recipient that is not registered yet, and inform them that they may send the message as an invitation by providing an email or phone number where we can send the invitation. Invitations will have a unique, one-time-use URL, e.g. generated with uuid.uuid4(). When accessed, the invitee will see the InviteMessage and details about completing his/her registration profile. When registration is complete, the InviteMessage details are copied to a new instance of UserMessage (so their data is not lost), and it is assigned to the newly created User. The ability to interact with and invite persons who do not yet have accounts is a key feature of the application, and it seems better to separate the invitations from the private, in-app messages (easier to keep the functionality separate, and better if the data model changes). Is this a reasonable, good design? If not, what would you suggest? Do you have any improvements? Am I correct to choose to create a separate endpoint for creating invitations via the API?
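    A minimal, framework-agnostic sketch of the invite flow described above, in Python; the class names and fields mirror the question's model and are illustrative assumptions, not a finished implementation.

        # Sketch of the one-time invite URL and the InviteMessage -> UserMessage conversion.
        import uuid
        from dataclasses import dataclass, field
        from datetime import datetime, timedelta
        from typing import Optional

        @dataclass
        class InviteMessage:
            sender_id: int
            recipient_entity_id: int
            invitee_email: str
            message_content: str
            # One-time-use token embedded in the expiring URL, e.g. /invites/<token>
            token: str = field(default_factory=lambda: uuid.uuid4().hex)
            expires_at: datetime = field(default_factory=lambda: datetime.utcnow() + timedelta(days=7))
            consumed: bool = False

        @dataclass
        class UserMessage:
            sender_id: int
            recipient_user_id: int
            message_content: str
            date_created: datetime = field(default_factory=datetime.utcnow)

        def redeem_invite(invite: InviteMessage, new_user_id: int) -> Optional[UserMessage]:
            """On registration, copy the invite into a regular UserMessage so no data is lost."""
            if invite.consumed or datetime.utcnow() > invite.expires_at:
                return None  # expired or already used - the one-time URL is no longer valid
            invite.consumed = True
            return UserMessage(sender_id=invite.sender_id,
                               recipient_user_id=new_user_id,
                               message_content=invite.message_content)

    Keeping the conversion in one place like this also makes it easier to expose invitations as their own API endpoint, separate from the private message endpoint, as the question suggests.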

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put, EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are: Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard); Additional Storage Options for Snap Clone (includes support for the database feature CloneDB); Improved Rapid Start Kits; Extensible Metering and Chargeback; and Miscellaneous Enhancements.
1. Comprehensive Database Service Catalog
Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are: Service Catalogs: Defining Standardized Database Service; and High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]. EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits: present a collection of standardized database service definitions; define standardized pools of hardware and software for provisioning; role based access to cater to different classes of users; automated procedures to provision the predefined database definitions; set up chargeback plans based on service tiers and database configuration sizes; etc. Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration: standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites); the standby databases can be single instance, RAC, or RAC One Node databases; multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software; the standby databases can be in either mount or read only (requires the Active Data Guard option) mode; all database versions 10g to 12c are supported (as certified with EM 12c); all 3 protection modes can be used - Maximum Availability, Maximum Performance, and Maximum Protection; and log apply can be set to sync or async along with the required apply lag. The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (Primary / Standby [1 or more], as supported in EM 12cR4):
SI / -
SI / SI
RAC / -
RAC / SI
RAC / RAC
RON / -
RON / RON
(where RON = RAC One Node, supported via custom post-scripts in the service template)
A sample service catalog would look like the image below.
Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.
2. Additional Storage Options for Snap Clone
In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. This delivers the benefits of database thin cloning without requiring you to drastically change the infrastructure or IT's operating style. In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patch set), it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources: the blog by Tim Hall, Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2; and the Oracle OpenWorld presentation by CERN, Efficient Database Cloning using Direct NFS and CloneDB. The advantages of the new CloneDB integration with EM12c Snap Clone are: space and time savings; ease of setup - no additional software is required other than the Oracle database binary; works on all platforms; reduces the dependence on storage administrators; the cloning process is fully orchestrated by EM12c and delivered to developers/DBAs/QA testers via the self service portal; uses dNFS to deliver better performance, availability, and scalability over kernel NFS; and the complete lifecycle of the clones is managed by EM12c - performance, configuration, etc.
3. Improved Rapid Start Kits
DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups. The Rapid Start Kit is in reality a simple emcli script which takes a bunch of XML files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both on Oracle's engineered systems, like Exadata and SuperCluster, and on commodity hardware. One can draw a parallel to the Exadata One Command script, which likewise takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.
Steps to use the kit: The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup. It can be run from this default location or from any server which has the emcli client installed. For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py. For Exadata, special integration is provided to reduce the number of inputs even further; the script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py. The database_cloud_setup.py script takes two inputs: the cloud boundary XML, which defines the cloud topology in terms of the zones and pools along with host names, Oracle home locations or container database names that would be used as infrastructure for provisioning database services (this file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM); and the input XML, which captures inputs for users, roles, profiles, service templates, etc. - essentially, all inputs required to define the DB services and other settings of the self service portal. Once all the XML files have been prepared, invoke the script as follows for PDBaaS:
emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml
The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.
4. Extensible Metering and Chargeback
Last but not least, Metering and Chargeback in release 4 have been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to: extend chargeback to any target type managed in EM; promote any metric in EM as a chargeback entity; extend the list of charge items via metric or configuration extensions; and model abstract entities like the number of backup requests, job executions, support requests, etc. A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.
5. Miscellaneous Enhancements
There are other miscellaneous, yet important, enhancements that are worth a mention.
These have mostly been asked for by customers like you. These are:
Custom naming of DB Services - Self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces. Every custom name is validated for uniqueness in EM.
'Create like' of Service Templates - Now creating variants of a service template is only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
Profile viewer - View the details of a profile, like datafiles, control files, snapshot IDs, export/import files, etc., prior to its selection in the service template.
Cleanup automation - for failed and successful requests. A single emcli command cleans up all remnant artifacts of a failed request. Cleanup can be performed on a per request basis or for the entire pool. As an extension, you can also delete successful requests.
Improved delete user workflow - Allows administrators to reassign cloud resources to another user or delete all of them.
Support for multiple tablespaces for schema as a service - In addition to multiple schemas, users can also specify multiple tablespaces per request.
I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!
References: Cloud Management Page on OTN; Cloud Administration Guide [Documentation]
-- Adeesh Fulay (@adeeshf)

    Read the article

  • How to decide on a price for the project as a freelancer

    - by Shekhar_Pro
    I have seen similar questions on this SE site, but none comes close to a sure-shot answer and many are rather subjective. So I am taking a website as an example, to be more objective, for you to decide the development price I should quote for the complete work. I would like to have specific figures. In the past I developed many projects for my classmates (computer science and a few .NET) when I was in college, and there I just arbitrarily quoted the price I would take depending on my mood and the customer's ability to pay, usually ranging from Rs. 500 (about $10 USD) to Rs. 1500 (about $30 USD). I have also developed a few websites, but those were open source and free. But this time, impressed by my work, I have got a client that wants to get a website developed similar to this: [ http://www.jeetle.in/ ]. So, taking this website as an example, tell me how much I should charge for the complete work, from design to payment gateway implementation (excluding the charge the payment gateway provider will take). Some information you might like to consider: I am the only developer on this project, if that makes any difference. I would be using ASP.NET and MSSQL Express for server-side processing and jQuery on the client. The time period offered for development is about 4 to 6 weeks. It's like I know my work, but not how much I'm worth.

    Read the article

  • Efficient use of Bundling

    - by ACShorten
    One of the discussions I am having with customers and consulting people is about Bundling and its appropriate use. We introduced Bundling post-release in the V2.2 code line to allow partners and consultants to build solutions using the Configuration Tools objects, such as UI Maps, Service Scripts, Business Objects, Business Services, etc., and then export and migrate them as solutions. Whilst that was the original intent, I have found a few teams using the facility for other data and then complaining about the efficiency or relevance of the tool. Here are a number of guidelines to help optimize the use of Bundling for your implementation:
Not all objects can be bundled - Only specific objects in the product can be bundled. These are targeted at Configuration Tools objects and a select group of other objects that are required for them: Maintenance Objects with the option "Eligible for Bundling" set to Y (and which also contain a Bundling Add BO).
Add objects to the Bundle as you complete them - Bundling can have issues with sequencing objects. The best way of combating this is to add objects to the bundle as you complete them. This will help make sure you sequence the loading of the objects, as you are building them, in the correct order. Remember that Bundling was designed for developers and partners to deliver solutions. If you leave adding objects to a Bundle to the Bundle Export zones, then you will have less control over the sequence in which they are applied, and this can cause timing issues.
Bundling takes the latest revision - If you combine Bundling with Revision Control, then Bundling will take the latest release of the object at the time of the export operation.
Bundling and Version Control products - If you use a version control tool to manage your Java code, then you can also check in the Bundle to associate a release between the code and a bundle.
Bundling is quite a powerful feature of the Oracle Utilities Application Framework that allows sales, partners, consultants and customers to package and import their Configuration Tools based solutions.

    Read the article

  • RAM caching causes severe performance drops

    - by B T
    I have read plenty of threads on memory caching and the standard responses of "a large cache is good, it shouldn't affect performance" and "the kernel knows best". I have recently upgraded from 12.04 to 12.10 and changed from VirtualBox to VMware Workstation, and the performance differences are severe (I suspect because of the latter). When I am running my virtual machine, the system load monitor graph generally shows less than 50% memory usage. The system load indicator is showing me that the rest of my RAM is used by the cache all the time. Plain and simple, this is the comparison:
BEFORE: Cache was very sparingly used; pretty much none of my memory usage was the cache. Swappiness was 0 (caused my memory to be used first, then swap only if needed). Performance was quite good and logical: RAM was used fully first, and caching was minimal. I could run enough software to utilize my full 4GB of RAM without any performance degradation whatsoever. Swap space was then used as needed, which was obviously slower (I am on a HDD) but was still usable when the current program was loaded into memory.
AFTER: Cache is used to fill the full 4GB as soon as my virtual machine is run. Swappiness is 0 (same behaviour as before, but the cache uses the full memory straight away). Performance is terrible and unusable while running Ubuntu software. Basic things like changing windows take 2+ minutes. Changing screens happens frame by frame, sometimes over up to 5 minutes. I cannot run an IDE and a VM like I could with ease before.
So basically, any suggestions on how to take my performance back to how it was before while keeping my current setup? My suspicion is VMware is the problem, but how do I see what is tied to the use of the cache? Surely there is a way to control this behaviour in software as polished as VMware? Thanks
EDIT: It could also be important to note that the behaviour differs depending on whether VMware is open or closed. If VMware is open, then the RAM will lock at around 50% used and 50% cache and go into the complete lock up mentioned above. Contrastingly, if VMware is closed (after being open), then the RAM will continue to rise as needed, the cache will stay as the complete remaining memory, and there is no noticeable performance degradation.
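    One way to see exactly how much of the RAM is page cache versus application memory is to read /proc/meminfo directly. Below is a small Python sketch (Linux-only, using standard /proc/meminfo fields; run it with VMware open and then closed to compare the split):

        # Show how RAM is split between applications and the page cache (values in kB).
        def read_meminfo():
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, rest = line.split(":", 1)
                    info[key] = int(rest.strip().split()[0])  # first field is the size in kB
            return info

        m = read_meminfo()
        app_used = m["MemTotal"] - m["MemFree"] - m["Buffers"] - m["Cached"]
        print(f"Total : {m['MemTotal'] / 1024:.0f} MiB")
        print(f"Apps  : {app_used / 1024:.0f} MiB")
        print(f"Cache : {m['Cached'] / 1024:.0f} MiB (page cache)")
        print(f"Swap  : {(m['SwapTotal'] - m['SwapFree']) / 1024:.0f} MiB in use")

        with open("/proc/sys/vm/swappiness") as f:
            print("vm.swappiness =", f.read().strip())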

    Read the article

  • Do you know when to send a done email in Scrum?

    - by Martin Hinshelwood
    At SSW we have always sent done emails to the owner/requestor to let them know that it is done. Others who are dependent on that task are CC'ed so they know they can proceed. But how does that fit into Scrum?
Update 14th April 2010: Rule added to Rules to better Scrum with TFS.
If you are working on a task: When you complete a Task that is part of a User Story, you need to send a done email to the Owner of that Story. You only need to add the Task #, Summary and a link to the item in WIWA. Remember that all your tasks should be under 4 hours, so spending lots of time on a Done Email for a Task would be counterproductive. Add more information if required; for example, you may have completed the task a different way than previously discussed. Make sure that every User Story has an Owner as per the rules.
If you are the owner of a story: When you complete a story, you should send a comprehensive done email as per the rules. Make sure you add a list of all of the Tasks that were completed as part of the story and the Done criteria that you met. If your done criteria say: Built successfully, 30% code coverage, all tests passed - then add an illustration to show this.
Figure: Show that you have met your Done criteria where possible.
This is all designed to help you Scrum Team members (Product Owner, ScrumMaster and Team) validate the quality of the work that has been completed. Remember that you are not DONE until your team says you are done.
Technorati Tags: SSW, Scrum, SSW Rules

    Read the article

  • juicy couture sale would nanytime go for any added casting

    - by user109129
    Depaccident aloft the achievement and the air-conditionedior of getting, you can admissionment the acclimateed acadding ambiencel as per your acquiesceadeptness.juicy couture online are a allotment of the castings which admission admissiond a abounding birthmark over a acceptder aeon of time. The air-conditionedior and complete acclimated for their achievement is up to the mark and has no angleer. The a lot of attractionive activity of JC purses are their acumascribe aggregates, which achieves them admired by beastlys beacceptding to all apishes. These are simple to be admissionmentd from onbandage retail aliment. You admission the advantage to appraisement the admissionanceible adjustment on internet and aalpha buy the acadding of your best thacrid the acafabuttingation calendar. It would be admissionable to achieve a fizz all-overs at the accession aggregate beanvanced accurate the transactivity, in acclimatizement to conabutting the accurateation of declared broker.Alacquireing, tachievement are so aapprenticeding advantages in the exchange that it has in fact beappear difficult to achieve the acclimateed best, but if you alattainable admissionment juicy couture sale, you would nanytime go for any added casting. Their beside attaccident with adjustment of blooms is the best activitys which a exhausted woman may annual for. The bandage of these tots is in fact able, and they do not aperture out even afterwardss acrid and added acquireing.acclimateed admeaconstantments of Juicy couture battleaccoutrements achieve them applicative to accouterment arbitrary admeaconstantment requirements. Some woman like huge accoutrements while addeds like acclimatized admeaconstantmentd tots. In any case, the cobasicotmentments of these purses are bottomless and admission the acclimateation to acclimatize sanytimeal sbasic activitys. The admirable and dejected bloom abuttals is anadded complete aspect of JC getting. acquire the acclimateed ones for you accordanceing to the accumulating of your accoutermentss, and leave a ambrosial afterwardseffect on addeds. abonfire and acclimateed contrasts are in trend nowacanicule, so accrue yourself up to date by purchasing this air-conditionedb accumulating from Juicy Couture.

    Read the article

  • New Oracle Solaris 11 Administration book

    - by glynn
    During the development of Oracle Solaris 11, one of the main goals was to modernize the operating system and remove some of the existing frustrations that our administrative audience had in deploying and using the platform within data centers around the world. That meant a comprehensive clean out of some existing technologies to provision the operating system (replacing JumpStart with the Automated Installer) and manage system software (replacing SVR4 with IPS packaging), consolidating the vast spectrum of networking configuration, and enhancing the user environment to provide familiarity for those who were used to administering Linux environments, among many other things. While some considered the changes in Oracle Solaris 11 a negative, most will be impressed at how far we've come - the deeper integration of key technologies, presented in a consolidated and consistent form. It is easier to administer the Oracle Solaris platform than ever before, and I have no doubt that administrators coming from other platforms will be hugely impressed with what they see, especially if they're judging based on past experiences of Solaris 8 and Solaris 9. In fact, I'd go further and say that Oracle Solaris 11 is a more powerful, integrated and usable platform than most Linux platforms I've seen. But as with anything, there's always an initial learning curve to get through. We've provided a significant selection of learning materials out on the Oracle Solaris 11 pages on Oracle Technology Network and some great training and certification options. One more option is now available in the form of a book: Oracle Solaris 11 System Administration: The Complete Reference. This provides an exceptional reference to help administrators learn about Oracle Solaris 11, especially those who have come from the Linux platform. As is quoted in the first chapter of the guide: Linux users and developers will find in Oracle Solaris 11 a familiar and quickly productive working environment; we point out similarities and differences between the Linux and Solaris kernels and system administration tools, and describe how typical open source Web development tasks are accomplished in this OS. So I would encourage you to take a read of it and start seriously considering Oracle Solaris 11 to be a platform choice for your data center. Oracle Solaris 11 System Administration: The Complete Reference - yours for only $32.50 (if you successfully use the promotion code - otherwise it is worth shopping around to pick up a good deal).

    Read the article

  • Spring to Java EE, Part Three - new tech article on otn/java

    - by Janice J. Heiss
    In a new article up on otn/java, Java EE expert David Heffelfinger continues his series exploring the relative strengths and weaknesses of Java EE and Spring. Here, he demonstrates how easy it is to develop the data layer of an application using Java EE, JPA, and the NetBeans IDE instead of the Spring Framework. In the first two parts of the series, he generated a complete Java EE application by using JavaServer Faces (JSF) 2.0, Enterprise JavaBeans (EJB) 3.1, and Java Persistence API (JPA) 2.0 from Spring’s Pet Clinic MySQL schema, thus showing how easy it is to develop an application whose functionality equaled that of the Spring sample application. In his new article, Heffelfinger tweaks the application to make it more user-friendly. From the article: “The generated application displays primary keys on some of the pages, and these keys are surrogate primary keys—meaning that they have no business value and are used strictly as a unique identifier—so there is no reason why they should be visible to the user. In addition, we will modify some of the generated labels to make them more user-friendly.” He concludes the article with a summary: “The Java EE version of the application is not a straight port of the Spring version. For example, the Java EE version enables us to create, update, and delete veterinarians as well as veterinary specialties, whereas the Spring version of the application enables us only to view veterinarians and specialties. Additionally, the Spring version has a single page for managing/viewing owners, pets, and visits, whereas the Java EE version of the application has separate pages for each of these entities. The other thing we should keep in mind is that we didn’t actually write a lot of the code and markup for the Java EE version of the application, because the bulk of it was generated by the NetBeans wizard.” Have a look at the complete article here.

    Read the article

  • Cannot ping any computers on LAN

    - by Timothy
    I haven't been able to find a straightforward answer on this yet. I'm hoping people here are able to help! Keep in mind that I'm a complete beginner at this - this is the first installation I've done for any Linux system ever, so please keep that in mind when answering this question. We are a complete Windows shop, using nothing but Microsoft products, but we are looking into the value of OpenStack; however, we have been having problems getting Ubuntu Server installed and speaking to the network. The machine is getting an IP address, which is telling me that some sort of DHCP activity is working, but I'm not able to ping any computer on our network, nor able to connect to the internet. Every time I try to ping I'm getting: Destination Host Unreachable. I've tried modifying the resolv.conf file with our static details to match my Windows 7 machine, still with no luck. I even tried disabling the firewall on Ubuntu Server 11 and no luck. Any ideas? Please let me know if there is any information you need from the server and I'll post it up.
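    "Destination Host Unreachable" usually points at the interface address, netmask or default route rather than at resolv.conf (DNS only matters once basic IP connectivity works). A small, hypothetical diagnostic sketch in Python that simply wraps the standard tools; the gateway address below is a placeholder, substitute your own:

        # Quick network sanity check (assumes the iproute2 and ping tools are installed).
        import subprocess

        def run(cmd):
            print("$", " ".join(cmd))
            print(subprocess.run(cmd, capture_output=True, text=True).stdout)

        run(["ip", "addr", "show"])              # do we have the address/netmask we expect?
        run(["ip", "route", "show"])             # is there a default route at all?
        run(["ping", "-c", "2", "192.168.1.1"])  # can we reach the gateway? (placeholder IP)
        run(["ping", "-c", "2", "8.8.8.8"])      # reach the internet by IP, ruling out DNS
        run(["cat", "/etc/resolv.conf"])         # which resolver is actually configured?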

    Read the article

  • Contract Work - Lessons Learned

    - by samerpaul
    I thought I would write a post of a different nature today, but still relevant to the tech world. I do a lot of contract jobs myself and really enjoy it. It's nice to keep jumping from project to project, and not having to go to an office or keep regular hours, etc. I really enjoy it. I have learned a lot in the past few years of doing it (both from experience and from help given to me by others and the internet), so I thought I'd share some of that knowledge/experience today. So here's my own personal "lessons learned" that hopefully will help you if you find yourself doing contract work:
Should I take the job? OK, so this is the first step. Assuming you were given sufficient information about what they want, then you should really think about what you're capable of doing and whether or not you should take this job. Personally, my rule is: if I know it's possible, I'll say yes, even if I don't yet know how to do it. That's because the internet is such a great help, it would be rare to run into an issue that you can't figure out with some help. So if your clients are asking for something that you don't yet know how to program, but you know you can do it on the platform, then go for it. How else are you going to learn? Use this rule with some limitation, however. If you're really lacking the expertise or foundation in something, then unless you have tons of time to complete the project, I wouldn't say yes. For example, I haven't personally done any 3D/OpenGL programming yet, so I wouldn't say yes to a project that extensively uses it.
OK, so I want the job, but how much do I charge? This part can be tricky. There is no set formula really, but I have some tips for pricing that will hopefully give you a better idea of how to confidently ask your price and have them accept. Here are some personal guidelines:
How much time do you have to complete the project? If it's shorter than average, then charge more. You can even make a subtle note about this (or a not so subtle one if they still don't get it). If it seems too short of a time (i.e. near impossible to complete), be sure to say that. It looks bad to promise a time that you can't keep - and it makes it less likely for them to return to you for work.
Your hourly rate: How long have you been working in that language? Do you have existing projects to back you up? Or previous contacts that can vouch for your work? Are there very few people with your particular skill set? All of these things will lend themselves to setting an hourly rate. I'd also try a quick Google search for your line of work, to see what the industry standard is at that point in time. I wouldn't price too low, because you want to make your time worth it. You also want them to feel like they're paying for quality work (assuming you can deliver it :) ). Finally, think about your client. If it's a small business, then don't price it too high if you want the job. If it's an enterprise (like a Fortune company), then don't be afraid to price higher. They have the budget for it.
Fixed price: If they want a fixed price project, then you need to think about how many hours it will take you to complete it and multiply that by the hourly rate you set for yourself. Then, honestly, I would add 10-20% on top of that (there is a tiny worked example of this padding rule at the end of this post). Why? Because nothing ever works exactly how you want it to. There are lots of times that something "trivial" is way harder than it should be, or something that "should work" doesn't for hours, and it eats away at your hourly rate.
I can't count the number of times I encountered a logical bug that took away an entire day's work, because debuggers don't help in those cases. By adding that padding in, it's still OK to have those days where you don't get as much done as you want. And another useful tip: depending on your client and the scope, you most likely want to stipulate that you both sign off on a specification sheet before doing any work, and that any changes will result in a re-evaluation of the price. This is to help protect you from being handed a huge new addition to the project half-way in, without any extra payment.
Scope of project: Finally, is it a huge project? Is it really small/fast? This affects how much your client will be willing to pay. If it sounds big, they will be willing to pay more for it. If it seems really small, then you won't be able to get away with a large asking price (as easily).
Ok, I priced it, now what? So now that you have the price, you want to make sure it feels justified to your client. I never set a price before I can really think about everything. For example, if you're still in your introduction phase, and they want a price, don't give one! Just comment that you will send them a proposal sheet with all the features outlined, and a price for everything. You don't want to shout out a low number and then deliver something that is way higher. You also don't want to shock them with a big number before they feel like they are getting a great product. Make up a proposal document in a word editor. Personally, I leave the price till the very end. Why? Because by the time they reach the end, you've already discussed all the great features you plan to implement, and how it's the best product they'll ever use, etc etc...so your price comes off as a steal! If you hit them up front with a price, they will read through the document with a negative bias. Think about those commercials on TV. They always go on about their product, then at the end, ask "What would you pay for something like this? $100? $50? How about $20!!". This is not by accident.
Scenario: I finished the job way earlier than expected. You have two options then. You can either polish the hell out of the application, and even throw in a few bonus features (assuming they are in line with the customer's needs), or you can sit and wait on it until you near your deadline. Why don't you want to turn it in too early? Because you should treat that extra time as a surplus. If you said it is going to take you 3 weeks, and it took you only 1, you have a surplus of 2 weeks. I personally don't want to let them know that I can do a 3 week project in 1 week. Why not? Because that may not always be the case! I may later have a 3 week project that takes all 3 weeks, but if I set a precedent of delivering super early, then the pressure is on for that longer project. It also makes it harder to quote longer times if you keep delivering too early. Feel free to deliver early, but again, don't do it too early. They may also wonder why they paid you for 3 weeks of work if you're done in 1. They may further wonder if the product sucks, or what is wrong with it, if it's done so early, etc. I would just polish the application. Everyone loves polish in their applications. The smallest details are what make an application go from "functional" to "fantastic".
And since you are still delivering on time, they are still going to be very happy with you.
Scenario: It's taking way too long to finish this, and the deadline is nearing/here! So this is not a fun scenario to be in, but it'll happen. Sometimes the scope of the project gets out of hand. The best policy here is OPENNESS/HONESTY. Tell them that the project is taking longer than expected, and give a reasonable time for when you think you'll have it done. I typically explain it in a way that makes it sound like it isn't something that I did wrong, but it's just something about the nature of the project. This really goes for any scenario, to be honest. Just continue to stay open and communicative about your progress. This doesn't mean that you should email them every five minutes (unless they want you to), but it does mean that maybe every few days or once a week, give them an update on where you're at, and what's next. They'll be happy to know they are paying for progress, and it'll make it easier to ask for an extension when something goes wrong, because they know that you've been working on it all along.
Final tips and thoughts: In general, contract work is really fun and rewarding. It's nice to learn new things all the time, as mandated by the project, and to challenge yourself to do things you may not have done before. The key is to build a great relationship with your clients for future work, and for recommendations. I am always very honest with them and I never promise something I can't deliver. Again, under promise, over deliver! I hope this has proved helpful!
Cheers,
samerpaul
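    The "hours times rate, plus 10-20% padding" rule above is easy to sanity-check in a couple of lines. Here is a minimal Python sketch of that calculation; the hours, rate and padding figures are made-up placeholders, not recommendations.

        # Fixed-price quote = estimated hours * hourly rate, padded for the unknowns.
        # All numbers are placeholders - plug in your own estimate and rate.
        def fixed_price_quote(estimated_hours, hourly_rate, padding=0.15):
            """A padding of 0.10-0.20 covers the 'trivial' tasks that eat a whole day."""
            return estimated_hours * hourly_rate * (1 + padding)

        print(fixed_price_quote(estimated_hours=80, hourly_rate=40))                 # 3680.0
        print(fixed_price_quote(estimated_hours=80, hourly_rate=40, padding=0.20))   # 3840.0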

    Read the article

  • Oracle Database Appliance and Remarketers - A Whole New Opportunity

    - by Martin Morganti
    We have recently had another exciting announcement for the remarketer initiative. The Oracle Database Appliance is now available for resale through your authorized Value Added Distributor. This means that with no fees, no barriers and no upfront commitments, a remarketer can identify and close transactions for the Oracle Database Appliance without having to sign an Oracle contract. In addition, the remarketer is now authorized to sell the Oracle Database Appliance with Oracle Database 11g Enterprise Edition, RAC or RAC One Node software included in the transaction. The availability of the Oracle Database Appliance for remarketers means that a broad base of customers can now start engaging with Oracle through remarketers by taking advantage of this engineered system technology; basically, Oracle Hardware and Software Engineered to Work Together, to drive incremental revenue. The Oracle Database Appliance is a simple, reliable and affordable way to rapidly deploy a database cluster by plugging in the power and the network and pushing the one-button install for the Appliance Manager software. Find out more about the Oracle Database Appliance, including a webinar by Oracle President Mark Hurd, as well as other material to help you better understand the opportunity available to you as a remarketer. If you want Oracle to keep you informed about developments in Remarketer, complete the free registration, or talk to your local Oracle Remarketer Authorized Value Added Distributors. (The complete list is available on the Oracle Remarketer page.)

    Read the article

  • CodePlex Daily Summary for Tuesday, August 05, 2014

    CodePlex Daily Summary for Tuesday, August 05, 2014Popular ReleasesGMare: GMare Beta 1.3: Fixed: Sprites not being read inLib.Web.Mvc & Yet another developer blog: Lib.Web.Mvc 6.4.2: Lib.Web.Mvc is a library which contains some helper classes for ASP.NET MVC such as strongly typed jqGrid helper, XSL transformation HtmlHelper/ActionResult, FileResult with range request support, custom attributes and more. Release contains: Lib.Web.Mvc.dll with xml documentation file Standalone documentation in chm file and change log Library source code Sample application for strongly typed jqGrid helper is available here. Sample application for XSL transformation HtmlHelper/ActionRe...Virto Commerce Enterprise Open Source eCommerce Platform (asp.net mvc): Virto Commerce 1.11: Virto Commerce Community Edition version 1.11. To install the SDK package, please refer to SDK getting started documentation To configure source code package, please refer to Source code getting started documentation This release includes many bug fixes and minor improvements. More details about this release can be found on our blog at http://blog.virtocommerce.com.Json.NET: Json.NET 6.0 Release 4: New feature - Added Merge to LINQ to JSON New feature - Added JValue.CreateNull and JValue.CreateUndefined New feature - Added Windows Phone 8.1 support to .NET 4.0 portable assembly New feature - Added OverrideCreator to JsonObjectContract New feature - Added support for overriding the creation of interfaces and abstract types New feature - Added support for reading UUID BSON binary values as a Guid New feature - Added MetadataPropertyHandling.Ignore New feature - Improv...Aitso-a platform for spatial optimization and based on artificial immune systems: Aitso_0.14.08.01: Aitso0.14.08.01Installer.zipVidCoder: 1.5.24 Beta: Added NL-Means denoiser. Updated HandBrake core to SVN 6254. Added extra error handling to DVD player code to avoid a crash when the player was moved.EasyMR: v1.0: 1 ????????? 2 ????????? 3 ??????? ????????????????????,???????????????,???????????PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.5: *Added Send-Keys function to send a sequence of keys to an application window (Thanks to mmashwani) *Added 3 optimization/stability improvements to Execute-Process following MS best practice (Thanks to mmashwani) *Fixed issue where Execute-MSI did not use value from XML file for uninstall but instead ran all uninstalls silently by default *Fixed error on 1641 exit code (should be a success like 3010) *Fixed issue with error handling in Invoke-SCCMTask *Fixed issue with deferral dates where th...Facebook Graph Toolkit: Facebook Graph Toolkit 5.0.493: updated to Graph Api v2.0 updated class definitions according to latest documentation fixed a potential memory leak causing socket exhaustion LINQ to FQL simplified usage performance improvement added missing constructorsCrashReporter.NET : Exception reporting library for C# and VB.NET: CrashReporter.NET 1.4: Added French, Italian, German, Russian and Spanish translation. Added DeveloperMessage field so developers can send value of variables or other details they want along with crash report. Fixed bug where subject of crash report email was affected by translation. Fixed bug where extension is not added in SaveFileDialog while saving crash report.AutoUpdater.NET : Auto update library for VB.NET and C# Developer: AutoUpdater.NET 1.3: Fixed problem in DownloadUpdateDialog where download continues even if you close the dialog. 
Added support for new url field for 64 bit application setup. AutoUpdater.NET will decide which download url to use by looking at the value of IntPtr.Size. Added German translation provided by Rene Kannegiesser. Now developer can handle update logic herself using event suggested by ricorx7. Added italian translation provided by Gianluca Mariani. Fixed bug that prevents Application from exiti...Windows Embedded Board Support Package for BeagleBone: WEC2013 Beaglebone Black 03.02.00: Demo image. Runs on BeagleBone White and Black. On BeagleBone Black works with uSD or eMMC based images. XAML support added. SDK and demo tests included. New SDK refresh (uninstall old version first) IoT M2MQTT client built in. Built and maintained in the U.S.A.!Essence#: Philemon-3 (Alpha Build 19): The Philemon-3 release corrects a problem with the script of the NSIS installation program generator used to build the installer for the previous release. That issue could have been addressed by simply fixing the script and then regenerating the installation program, but a) it was judged to be less work to create a new release based on the current development code, and b) a new release does a better job of communicating the fact that the installation program for the previous release is broken...SEToolbox: SEToolbox 01.041.012 Release 1: Added voxel material textures to read in with mods. Fixed missing texture replacements for mods. Fixed rounding issue in raytrace code. Fixed repair issue with corrupt checkpoint file. Fixed issue with updated SE binaries 01.041.012 using new container configuration.Continuous Affect Rating and Media Annotation: CARMA v6.02: Compiled as v6.02 for MCR 8.3 (2014a) 32-bit Windows version Ratings will now only be saved for full seconds (e.g., a 90.2s video will now yield 90 ratings instead of 91 ratings). This should be more transparent to users and ensure consistency in the number of ratings output by different computers. Separated the XLS and XLSX output formats in the save file drop-down box. This should be more transparent to users than having to add the appropriate extension.Magick.NET: Magick.NET 6.8.9.601: Magick.NET linked with ImageMagick 6.8.9.6 Breaking changes: - Changed arguments for the Map method of MagickImage. - QuantizeSettings uses Riemersma by default.Wallpapr: Wallpapr 1.7: Fixed the "Prohibited" message (Flickr upgraded their API to SSL-only).DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access compnent for .Net 4.0+. It has clearly and easily programing interface for ORM and sql directly, and supoorted Access, Sql Server, MySql, SQLite, Firebird, PostgreSQL and Oracle. It also provide a Ruby On Rails style MVC framework. Asp.Net DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip include the setup package. DbEntry.Net.v4.2.Src.zip include source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1 This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's New?Here are some important things to know about version 6: Open Source Now being run as a full open source project. Full source code on CodePlex. Collaboration encouraged! 
Updated Code Base Brand-new code base (WPF/C#/.NET 4.5) Visual Studio 2013 solution (previously VS2010) Uses the Task Parallel Library (TPL) for asynchronous background operat...Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files was missing into the latest release of WPP which lead to crash when trying to make a custom update. Add a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Add the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated, may apply to all computers).New ProjectsAPS - Folha de Pagamento: Folha de PagamentoCIS470 Metrics Tracking: A data driven application designed to display various project progress metrics and reports based off of data. CIS470 Metrics Tracking DeVry Capstone project.Excel Converter: A great tools to convert MS excel to PDF and vice verse. We plan support both Window and Web version.FreightSoft: A Web Based Shipping and Freight Management SystemGoogle .Net API: Dot Net warp for Google API Services. API Demo, documentation, simple useful tools. Welcome to join us make more google API works on Dot Net World. Google Driver downloader: A free tool to bulk download file from google driver to local box.Omniprogram: OmniprogramOnline OCR: A free to OCR via google APIOnline Resume Parsing Using Aspose.Words for .NET: Online Resume Parsing Using Aspose.Words for .NET. PDF Text Searcher: This application searches for a word from the PDF file name inside the PDF itself.PEIN Framework: A collection of day to day required libraries.Public Components: Public Components, by Danny VarodRole Based View for Microsoft Dynamic CRM 2011 & 2013: Allow admin to configure entity's view based on role.scmactivitytfs: Test project to play with Team Foundation Server.Selfpub.CrmSolution: Tram-pam-pamWindGE: WindGE????DirectX11??????3D???????,????????????????????,???WPF???????。yao's project: a project for personal study

    Read the article

  • Oracle Solaris Zones Physical to virtual (P2V)

    - by user939057
    Introduction
This document describes the process of creating a Solaris 10 image built from a physical system and migrating it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability. Using an example and various scenarios, this paper describes how to take advantage of the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability together with other Oracle Solaris features: optimizing performance using Solaris 10 resource management, advanced storage management using Solaris ZFS, plus improving operating system visibility with Solaris DTrace. The most common use for this tool is when consolidating existing systems onto virtualization-enabled platforms. In addition, we can use the Physical-to-Virtual (P2V) capability for other tasks, for example backing up physical systems and moving them into a virtualized operating system environment hosted on the Disaster Recovery (DR) site; another option can be building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time.
Oracle Solaris Zones
Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors. This technology provides an isolated and secure environment for running applications. A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System. Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system.
Oracle Solaris Zones Physical-to-Virtual (P2V)
A new feature for Solaris 10 9/10. This feature provides the ability to build a Solaris 10 image from a physical system and migrate it into a virtualized operating system environment. There are three main steps when using this tool: 1. Image creation on the source system; this image includes the operating system and optionally the software we want to include within the image. 2. Preparing the target system by configuring a new zone that will host the new image. 3. Image installation on the target system using the image we created in step 1. The host where the image is built is referred to as the source system, and the host where the image is installed is referred to as the target system.
Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V)
Here are some benefits of this new feature: Simple - easy build process using Oracle Solaris 10 built-in commands. Robust - based on Oracle Solaris Zones, a robust and well known virtualization technology. Flexible - supports migration from V-series servers to T-series or M-series systems. For the latest server information, refer to the Sun Servers web page.
Prerequisites
The target Oracle Solaris system should be running the latest version of the patch cluster, and the minimum Solaris version on the target system should be Solaris 10 9/10. Refer to the latest Administration Guide for Oracle Solaris for a complete procedure on how to download and install Oracle Solaris. NOTE: If the source system used to build the image is an older version than the target system, then during the process the operating system will be upgraded to Solaris 10 9/10 (update on attach).
Creating the Image Used to Distribute the Software
We will create an image on the source machine.
We can create the image on the local file system and then transfer it to the target machine, or build it on NFS shared storage and mount the NFS file system from the target machine.
Optional: before creating the image, we need to complete the installation of the software that we want to include with the Solaris 10 image.
An image is created by using the flarcreate command:
Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar
The command does the following: -S specifies that we skip the disk space check and do not write archive size data to the archive (faster); -n specifies the image name; -L specifies the archive format (i.e. cpio).
Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later.
Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flar
You can see an example of the archive identification section in Appendix A: archive identification section.
We can compress the flar image using the gzip command or by adding the -c option to the flarcreate command:
Source # gzip /var/tmp/solaris_10_up9.flar
An md5 checksum can be created for the image in order to ensure no data tampering (a small Python sketch for verifying this checksum on the target system appears at the end of this article):
Source # digest -v -a md5 /var/tmp/solaris_10_up9.flar
Moving the image into the target system
If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine.
Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp
Configuring the Zone on the target system
After copying the software to the target machine, we need to configure a new zone in order to host the new image in that zone. To install the new zone on the target machine, first we need to configure the zone (for the full zone creation options see the following link: http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html ).
ZFS integration
A flash archive can be created on a system that is running a UFS or a ZFS root file system. NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then by default the flar will actually be a ZFS send stream, which can be used to recreate the root pool. This image cannot be used to install a zone. You must create the flar with an explicit cpio or pax archive when the system has a ZFS root. Use the flarcreate command with the -L archiver option, specifying cpio or pax as the method to archive the files.
    Optionally, on the target system you can create the zone root on a ZFS file system in order to benefit from ZFS features (clones, snapshots, etc.):
    Target # zpool create zones c2t2d0
    Create the zone root folder:
    Target # chmod 700 /zones
    Target # zonecfg -z solaris10-up9-zone
    solaris10-up9-zone: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:solaris10-up9-zone> create
    zonecfg:solaris10-up9-zone> set zonepath=/zones
    zonecfg:solaris10-up9-zone> set autoboot=true
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set address=192.168.0.1
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    zonecfg:solaris10-up9-zone:net> end
    zonecfg:solaris10-up9-zone> verify
    zonecfg:solaris10-up9-zone> commit
    zonecfg:solaris10-up9-zone> exit
    Installing the Zone on the target system using the image
    Install the configured zone solaris10-up9-zone by using the zoneadm command with the install -a option and the path to the archive. The following example installs the image and runs sys-unconfig on the zone:
    Target # zoneadm -z solaris10-up9-zone install -u -a /var/tmp/solaris_10_up9.flar
    Log File: /var/tmp/solaris10-up9-zone.install_log.AJaGve
    Installing: This may take several minutes...
    The following example shows how to preserve the system identity instead:
    Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar
    Resource management
    Some applications are sensitive to the number of CPUs in the target zone. You can match the number of CPUs on the zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> add dedicated-cpu
    zonecfg:solaris10-up9-zone:dedicated-cpu> set ncpus=16
    zonecfg:solaris10-up9-zone:dedicated-cpu> end
    DTrace integration
    Some applications might need to be analyzed with DTrace inside the target zone. You can add DTrace support to the zone using the zonecfg command:
    zonecfg:solaris10-up9-zone> set limitpriv="default,dtrace_proc,dtrace_user"
    Exclusive IP stack
    An Oracle Solaris Container running on Oracle Solaris 10 can share an IP stack with the global zone, or it can have an exclusive IP stack (available since Oracle Solaris 10 8/07). An exclusive IP stack provides a complete, tunable, manageable and independent networking stack to each zone. A zone with an exclusive IP stack can configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec. The following example shows how to configure an Oracle Solaris zone with an exclusive IP stack:
    zonecfg:solaris10-up9-zone> set ip-type=exclusive
    zonecfg:solaris10-up9-zone> add net
    zonecfg:solaris10-up9-zone:net> set physical=nxge0
    zonecfg:solaris10-up9-zone:net> end
    When the installation completes, use the zoneadm list -i -v options to list the installed zones and verify the status:
    Target # zoneadm list -i -v
    The new zone's status is installed:
    ID NAME                 STATUS     PATH    BRAND   IP
    0  global               running    /       native  shared
    -  solaris10-up9-zone   installed  /zones  native  shared
    Now boot the zone:
    Target # zoneadm -z solaris10-up9-zone boot
    We need to log in to the zone in order to complete its setup, or place a sysidcfg file in the zone before booting it for the first time (see the example sysidcfg file in Appendix B: sysidcfg file section):
    Target # zlogin -C solaris10-up9-zone
    Troubleshooting
    If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On failure, the log file is in /var/tmp in the global zone. If a zone installation is interrupted or fails, the zone is left in the incomplete state.
    Use uninstall -F to reset the zone to the configured state, and delete it if it is no longer needed:
    Target # zoneadm -z solaris10-up9-zone uninstall -F
    Target # zonecfg -z solaris10-up9-zone delete -F
    Conclusion
    The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured images with different software configurations for faster deployment and server consolidation. In this document, I demonstrated how to build and install images and how to integrate them with other Oracle Solaris features such as ZFS and DTrace.
    Appendix A: archive identification section
    We can use the head -n 20 /var/tmp/solaris_10_up9.flar command to access the identification section that contains the detailed description:
    Target # head -n 20 /var/tmp/solaris_10_up9.flar
    FlAsH-aRcHiVe-2.0
    section_begin=identification
    archive_id=e4469ee97c3f30699d608b20a36011be
    files_archived_method=cpio
    creation_date=20100901160827
    creation_master=mdet5140-1
    content_name=s10-system
    creation_node=mdet5140-1
    creation_hardware_class=sun4v
    creation_platform=SUNW,T5140
    creation_processor=sparc
    creation_release=5.10
    creation_os_name=SunOS
    creation_os_version=Generic_142909-16
    files_compressed_method=none
    content_architectures=sun4v
    type=FULL
    section_end=identification
    section_begin=predeployment
    begin 755 predeployment.cpio.Z
    Appendix B: sysidcfg file section
    Target # cat sysidcfg
    system_locale=C
    timezone=US/Pacific
    terminal=xterms
    security_policy=NONE
    root_password=HsABA7Dt/0sXX
    timeserver=localhost
    name_service=NONE
    network_interface=primary {
    hostname=solaris10-up9-zone
    netmask=255.255.255.0
    protocol_ipv6=no
    default_route=192.168.0.1
    }
    nfs4_domain=dynamic
    We need to copy this file into the zone before booting it:
    Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/
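    To tie the target-side steps together, here is a minimal shell sketch that configures, installs and boots the zone in one pass. It only collects the commands shown above; the pool, device, interface and address values (zones, c2t2d0, nxge0, 192.168.0.1) are the ones used in the examples, while the zonepath and the sysidcfg location are assumptions made for illustration. Adapt it to your environment before use.
    #!/bin/bash
    # Sketch only: mirrors the manual steps above, not a supported installer.
    set -e
    ZONE=solaris10-up9-zone
    FLAR=/var/tmp/solaris_10_up9.flar
    ZONEPATH=/zones/$ZONE                     # assumed zone root under the zones pool
    # Create the pool only if it does not already exist.
    zpool list zones >/dev/null 2>&1 || zpool create zones c2t2d0
    mkdir -p $ZONEPATH && chmod 700 $ZONEPATH
    # Non-interactive equivalent of the zonecfg session shown earlier.
    zonecfg -z $ZONE "create; set zonepath=$ZONEPATH; set autoboot=true; add net; set address=192.168.0.1; set physical=nxge0; end; verify; commit"
    # Install from the archive; -u runs sys-unconfig, use -p instead to preserve identity.
    zoneadm -z $ZONE install -u -a $FLAR
    # Provide the sysidcfg answers (see Appendix B) before the first boot.
    cp sysidcfg $ZONEPATH/root/etc/sysidcfg
    zoneadm -z $ZONE boot
    zlogin -C $ZONE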

    Read the article

  • Is a disk/ata timeout exception dangerous?

    - by j-g-faustus
    I have a few hard drives in mdadm RAID 5 configured to go to standby after a few minutes of inactivity (using the hdparm.conf spindown_time setting). At irregular intervals I get messages like these in dmesg:
    [ 1840.251661] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
    [ 1840.251722] ata4.00: failed command: SMART
    [ 1840.251758] ata4.00: cmd b0/d5:01:06:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in
    [ 1840.251759]          res 40/00:14:50:2e:04/00:00:02:00:00/40 Emask 0x4 (timeout)
    [ 1840.251858] ata4.00: status: { DRDY }
    [ 1840.251888] ata4: hard resetting link
    [ 1840.600742] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [ 1840.601521] ata4.00: configured for UDMA/133
    [ 1840.601547] ata4: EH complete
    [337877.713988] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
    [337877.714019] ata4.00: failed command: SMART
    [337877.714038] ata4.00: cmd b0/d5:01:06:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in
    [337877.714039]          res 40/00:04:90:10:81/00:00:00:00:00/40 Emask 0x4 (timeout)
    [337877.714089] ata4.00: status: { DRDY }
    [337877.714107] ata4: hard resetting link
    [337878.063085] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [337878.063743] ata4.00: configured for UDMA/133
    [337878.063764] ata4: EH complete
    I think the exception is caused by smartd when a drive does not wake up quickly enough. There are no issues (that I can tell) in accessing the drives normally through the file system - it takes a few seconds longer than normal when they are asleep, but there are no exceptions. Is this something I should worry about, as a potential symptom of something that could corrupt a drive over time? Or can I safely ignore it as part of normal operation? Edit: By request: smartctl -a for sda and sde; both disks are members of the array. If ata4 is the same as scsi-4, then sde is the one that gave the error above, according to /dev/disk/by-path.
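    For reference, the spindown_time setting mentioned above typically lives in /etc/hdparm.conf as a per-device block. A minimal sketch, with an illustrative device name and timeout that are not taken from the question, might look like this:
    # /etc/hdparm.conf - illustrative values only
    /dev/sdb {
        # spindown_time uses hdparm -S units: values 1-240 are multiples of 5 seconds,
        # so 120 means spin down after 10 minutes of inactivity.
        spindown_time = 120
    }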

    Read the article

  • Transmission shutdown script for multiple torrents?

    - by Khurshid Alam
    I have written a shutdown script for Transmission. Transmission calls the script after a torrent download finishes. The script runs perfectly on my machine (Ubuntu 11.04 & 12.04).
    #!/bin/bash
    sleep 300s
    # default display on current host
    DISPLAY=:0.0
    # find out if monitor is on. Default timeout can be configured from screensaver/Power configuration.
    STATUS=`xset -display $DISPLAY -q | grep 'Monitor'`
    echo $STATUS
    if [ "$STATUS" == " Monitor is On" ]
    ### Then check if it's still downloading a torrent. Couldn't figure out how. (Maybe) by monitoring network downstream activity?
    then
        notify-send "Downloads Complete" "Exiting transmission now"
        pkill transmission
    else
        notify-send "Downloads Complete" "Shutting Down Computer"
        dbus-send --session --type=method_call --print-reply --dest=org.gnome.SessionManager /org/gnome/SessionManager org.gnome.SessionManager.RequestShutdown
    fi
    exit 0
    The problem is that when I'm downloading more than one file and the first one finishes, Transmission executes the script. I would like it to do that, but only after all downloads are completed. I want to put a second check (right after the monitor check) to see whether it is still downloading another torrent. Is there any way to do this?
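    One possible way to add that check, sketched below, is to ask the running Transmission session whether any torrent is still incomplete using transmission-remote. This assumes the remote/web interface is enabled, that the plain-text column layout of transmission-remote -l (a "Done" column plus a trailing "Sum:" line) holds on your version, and it omits authentication (add -n user:pass if you use it):
    #!/bin/bash
    # Sketch: succeed (exit 0) if any torrent is still incomplete.
    still_downloading() {
        transmission-remote -l 2>/dev/null |
            awk 'NR > 1 && $1 != "Sum:" && $2 != "100%" { found = 1 } END { exit !found }'
    }
    if still_downloading; then
        notify-send "Downloads in progress" "Waiting before shutdown"
    else
        notify-send "Downloads Complete" "Shutting Down Computer"
        dbus-send --session --type=method_call --print-reply --dest=org.gnome.SessionManager /org/gnome/SessionManager org.gnome.SessionManager.RequestShutdown
    fi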

    Read the article

  • Customer Experience in the Year Ahead

    - by Christina McKeon
    With 2012 coming to an end soon, we find ourselves reflecting on the year behind us and the year ahead. Now is a good time to reflect on your customer experience initiatives to see how far you have come and where you need to go. Looking back on your customer experience efforts this year, were you able to accomplish the following?
    Customer journey mapping
    Align processes across the entire customer lifecycle (buying and owning)
    Connect all functional areas to the same customer data
    Deliver consistent and personal experiences across all customer touchpoints
    Make it easy and rewarding to be your customer
    Hire and develop talent that drives better customer experiences
    Tie key performance indicators (KPIs) to each of your customer experience objectives
    This is by no means a complete checklist for your customer experience strategy, but it does help you determine whether you have moved in the right direction for delivering great customer experiences. If you are just getting started with customer experience planning, or were not able to get to everything on your list this year, consider focusing on customer journey mapping in 2013. This exercise really helps your organization put the customer in the center and understand how everything you do affects that customer. At Oracle, we see organizations at all stages of customer experience maturity learn a lot when they go through journey mapping. Companies just starting out with customer experience get a complete understanding of what it is like to be a customer and how everything they do affects that customer. And organizations that are further along with customer experience often find journey mapping helps provide perspective when revisiting their customer experience strategy. Happy holidays and best wishes for delivering great customer journeys in 2013!

    Read the article

  • Software Licencing [closed]

    - by Craig
    A colleague of mine wanted a means to do something, so it was suggested that I write some software to do it. The software has turned into more than the original specification and is now rather complex, yet it is still not fully functional. My colleague has not paid me anything so far, and I am unwilling to continue writing the software until some faith has been reciprocated in my direction, as I have put a lot of effort into it. I am also unwilling to finish the software, because I do not want to give away a huge chunk of my time and effort for free, nor do I want to be under-compensated for my efforts. Some concerns I have: if I finish the software, what if the client doesn't pay me anything, or pays too little? And what if I write the software to a usable but incomplete level and the client still pays too little? I have lost my motivation to finish the software, as more and more specifications have been added, and I have developed a substantially complex piece of software while effectively being paid nothing. To finish the software I need motivation; money would provide it, but the client doesn't want to pay for something that isn't complete, yet keeps adding more requirements. I seem to be in a catch-22, as I have developed some software on faith and have had no faith reciprocated in my direction. I'm really not sure how to get some payment from the client, or how to develop a licensing model so that I get some money from the client and development resumes.

    Read the article

  • How to decide on a price for the project as a freelancer

    - by Shekhar_Pro
    I have seen similar questions on this SE site, but none comes close to a sure-shot answer and many are rather subjective. So I am taking a website as an example, to be more objective, for you to decide the development price I should quote for the complete work. I would like specific figures. In the past I developed many projects for my classmates (computer science, and a few in .NET) when I was in college, and there I just arbitrarily quoted the price I would take depending on my mood and the customer's ability to pay, usually ranging from Rs. 500 (about $10 USD) to Rs. 1500 (about $30 USD). I have also developed a few websites, but those were open source and free. But this time, impressed by my work, I have got a client that wants to get a website developed similar to this: [ http://www.jeetle.in/ ]. So taking this website as an example, tell me how much I should charge for the complete work, from design to payment gateway implementation (excluding the charge the payment gateway provider will take). Some information you might like to consider: I am the only developer on this project, if that makes any difference; I would be using ASP.NET and MSSQL Express for server-side processing and jQuery on the client; and the time period offered for development is about 4 to 6 weeks. It's like I know my work but not how much I'm worth.

    Read the article

  • How to make and restore incremental snapshots of hard disk

    - by brunopereira81
    I use VirtualBox a lot for distro and application testing purposes. One of the features I simply love about it is virtual machine snapshots: it saves the state of a virtual machine and can restore it to its former glory if something you did went wrong, without any problems and without consuming all your hard disk space. On my live systems I know how to create a 1:1 image of the file system, but all the solutions I've seen create a complete new image of the file system every time. Are there any programs / file systems that are capable of taking a snapshot of the current file system and saving it to another location, but, instead of making a complete new image, creating incremental backups? To describe what I want simply: it should work like dd images of a file system, but instead of only full backups it would also create incremental ones. I am not looking for clonezilla, etc. It should run within the system itself with no (or almost no) intervention from the user, but contain all the data of the file systems. I am also not looking for a "duplicity backup of your whole system excluding some folders" script plus dd to save your MBR; I can do that myself. I'm looking for extra finesse: something I can run before making massive changes to a system, so that if something went wrong, or I burned my hard disk after spilling coffee on it, I can just boot from a live CD and restore a working snapshot to a hard disk. It does not need to be daily; it doesn't even need a schedule. Just run it once in a while and let it do its job, and preferably raw-based rather than file-copy based.

    Read the article

  • Does Firefox Phishing Protection / Safebrowsing have any privacy implications?

    - by Nowo
    The current Google results are outdated. What is the current status? I was on www.mozilla.org/en-US/legal/privacy/firefox.html and saw "Protection Against Suspected Forgery and Attack Sites Features". Can you please translate more of this into plain language? First they download a list and compare locally... OK... Here it gets messy: "If there is a match, Firefox will check with its third party provider to ensure that the website is still on the blacklist. The information sent between Firefox and its third party provider(s) are hashed URLs. In fact, multiple hashed URLs are sent with the real hash so that the third party provider(s) will not know what site you are visiting." - If that hash were sent to Mozilla, would they know which site was accessed? "In order to safeguard your privacy, Firefox will not transmit the complete URL of web pages that you visit to anyone other than Mozilla and its service providers." - In other words, Mozilla and its service providers get all complete URLs (of sites that were on the blacklist)? "While it is possible that a third party service provider may determine the actual URL from the hashed URL sent, Mozilla's policy is to require [...]" - So privacy depends on policy rather than technology?

    Read the article

  • Is it possible to procure venture capital based on in-progress ideas? [migrated]

    - by Clay Shannon
    I hope this is not the wrong forum for this question, but I can't find one in the Stack Exchange "family" that would be more appropriate. I have ideas for two web sites which I think will be quite popular (they are totally unrelated to each other). I am a programmer, and a "creative" (photographer, author, musician). So I have the "vision" as well as the technical know-how to bring these websites into being. My "problem" is that I'm champing at the bit to complete them, and don't have much time to work on them (being employed full time, etc.). If I continue to work on them in my so-called spare time, it will probably be a year or more before they are both done. If I were in a position to work on them full time (IOW, if I had a "silent partner" willing to invest enough money that I could quit my job), I could have them complete in about three months. I would be willing to partner with somebody or some group who would back me financially in this way. My vision and work combined with their monetary investment could bring about "great things", or at least moderately great things. I know you can "crowd fund" startups and so on, but for that you need to expose your idea. My ideas are not something I would want to make public, as somebody might "steal" them. I'm willing to discuss them with serious individual potential investors, though (provided they were willing to sign a non-disclosure agreement). Does anybody have any recommendation on how I might find a suitable partner or partners for these ventures?

    Read the article

  • Displaying a Paged Grid of Data in ASP.NET MVC

    This article demonstrates how to display a paged grid of data in an ASP.NET MVC application and builds upon the work done in two earlier articles: Displaying a Grid of Data in ASP.NET MVC and Sorting a Grid of Data in ASP.NET MVC. Displaying a Grid of Data in ASP.NET MVC started with creating a new ASP.NET MVC application in Visual Studio, then added the Northwind database to the project and showed how to use Microsoft's Linq-to-SQL tool to access data from the database. The article then looked at creating a Controller and View for displaying a list of product information (the Model). Sorting a Grid of Data in ASP.NET MVC enhanced the application by adding a view-specific Model (ProductGridModel) that provided the View with the sorted collection of products to display, along with sort-related information such as the name of the database column the products were sorted by and whether the products were sorted in ascending or descending order. The Sorting a Grid of Data in ASP.NET MVC article also walked through creating a partial view to render the grid's header row so that each column header was a link that, when clicked, sorted the grid by that column. In this article we enhance the view-specific Model (ProductGridModel) with paging-related information: the current page being viewed, how many records to show per page, and how many total records are being paged through. Next, we create an action in the Controller that efficiently retrieves the appropriate subset of records to display, and then complete the exercise by building a View that displays that subset and includes a paging interface that allows the user to step to the next or previous page or jump to a particular page number; to render the numeric paging interface we create and use a partial view. Like its predecessors, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more! Read More >

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >