Search Results

Search found 19480 results on 780 pages for 'do your own homework'.

Page 374/780 | < Previous Page | 370 371 372 373 374 375 376 377 378 379 380 381  | Next Page >

  • Spotlight on an Office – Reading TVP offices

    - by Maria Sandu
    This month we’re in the UK at the Reading offices, for ‘Spotlight on an office’. The Reading office, which is Oracle’s UK headquarters, is based in Thames Valley Park (TVP), a bustling hive of activity that houses many different companies, a gym, and even a nursery. Overlooking the Thames and some of England’s beautiful countryside, this office, just a short free bus ride from Reading town centre, is in a fantastic location. The offices themselves are made up of 5 different buildings, each with its own car park, restaurant, and design. The main building, or TVP 510 as it is referred to, sits resplendent next to an extremely blue (for the UK) pond, filled with large koi carp that like to come to the surface of the water and bask on a sunny day. As the main hub of activity, TVP 510 is where you will find our dry cleaning service, the Ozone Gym, the main restaurant (which never fails to have someone in it), and the Marquee, which sits out the back amongst the picnic benches and is where we have barbecues in the summertime. Another highlight of the Reading offices is tucked away in TVP 530: the home of H2O, our sports and social club. This is the building that can best be described as having the ‘cool’ vibe, where you can relax and unwind, all whilst sipping a Starbucks (or a Costa if you prefer, located in TVP 550) and playing a game of pool in the cafeteria; alternatively, you can sit back and enjoy a seat in one of the luxury massage chairs! If you feel so inclined, you can also hire an OraBike from any of the TVP offices and, if you are anything like some of my team, cycle from Reading to Bath using the towpath that starts in Thames Valley Park. Oracle’s Reading offices are a great place to work; they are home to a diverse range of people and have a great atmosphere which would suit a graduate, an intern, or anyone who is looking to come and work for Oracle in the UK.

    Read the article

  • LibGDX efficient data saving/loading?

    - by grimrader22
    Currently, my LibGDX game consists of a 512 x 512 map of tiles and entities such as players and monsters. I am wondering how to efficiently save and load the data of my levels. At the moment I am using JSON serialization for each class I want to save. I implement the Json.Serializable interface for all of these classes and write only the variables that are necessary. So my map consists of 512 x 512 tiles - that's 262,144 tiles. Each tile on the map consists of a Tile object, which points to some final Tile object like a GRASS_TILE or a STONE_TILE. When I serialize each level tile, the final Tile that it points to is re-serialized over and over again, so if I have 100 tiles all pointing to GRASS_TILE, the data of GRASS_TILE is written 100 times over. When I go to load/deserialize my objects, 100 GrassTile objects are created, but they are each their own object. They no longer point to the final Tile object. I feel like this makes reading/writing files very slow. If I were to abandon JSON serialization, to my knowledge my next best option would be saving the level data to an SQL database. Unless there is a way to speed up serializing/deserializing 262,144 tiles, I may have to do this. Is this a good idea? Could I really write that many tiles to the database efficiently? To sum all this up, I am trying to save my levels using JSON serialization, but it is VERY slow. What other options do I have for saving the data of so many tiles? I must also note that the JSON serialization is not slow on a PC; it is only VERY slow on a mobile device. Since file writing/reading is so slow on mobile devices, what can I do?
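
    One way around the re-serialization is to stop writing Tile objects at all and store only small integer IDs that index into the shared tile definitions. Below is a minimal sketch of the idea in Python (the game itself is Java/LibGDX, where the same approach would be an int[] or byte[] written through a DataOutputStream); all names here are hypothetical, not LibGDX API:

        from array import array

        GRASS_TILE = object()   # shared, immutable tile definitions (flyweights)
        STONE_TILE = object()
        TILE_TYPES = [GRASS_TILE, STONE_TILE]                     # ID -> shared tile
        TILE_IDS = {id(t): i for i, t in enumerate(TILE_TYPES)}   # tile -> ID

        def save_map(grid, path, size=512):
            # one sequential 256 KiB write instead of 262,144 JSON objects
            ids = array('B', (TILE_IDS[id(t)] for row in grid for t in row))
            with open(path, 'wb') as f:
                ids.tofile(f)

        def load_map(path, size=512):
            ids = array('B')
            with open(path, 'rb') as f:
                ids.fromfile(f, size * size)
            # every grass cell points at the same shared GRASS_TILE again
            return [[TILE_TYPES[ids[y * size + x]] for x in range(size)]
                    for y in range(size)]

    Loading this way also restores the sharing that JSON deserialization broke, since each ID is mapped back to the single final Tile instance.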

    Read the article

  • SQL SERVER – Follow up – Usage of $rowguid and $IDENTITY

    - by pinaldave
    The most common question I receive is why do I blog. The answer is even simpler – I blog because I get extremely constructive comments and conversation from people like DHall and Kumar Harsh. Earlier this week, I shared a conversation between Madhivanan and myself regarding how to find out whether a table uses ROWGUID or not. I encourage all of you to read the conversation here: SQL SERVER – Identifying Column Data Type of uniqueidentifier without Querying System Tables. In simple words, the conversation between Madhivanan and myself brought out a simple query which returns the values of the UNIQUEIDENTIFIER column without knowing the name of the column. David Hall wrote a few excellent comments as a follow up, and every SQL enthusiast should read them first, second and third. David always brings positive energy; he first shows the limitations of my solution here and here, which he follows up with his own solution here. As he says, his solution is not perfect either, but it leaves learning bites for all of us – worth reading if you are interested in unorthodox solutions. Kumar Harsh suggested that one can also find the identity column used in a table in a very similar way using $IDENTITY. Here is how one can do the same:

        DECLARE @t TABLE (
            GuidCol UNIQUEIDENTIFIER DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
            IDENTITYCL INT IDENTITY(1,1),
            data VARCHAR(60)
        )
        INSERT INTO @t (data) SELECT 'test'
        INSERT INTO @t (data) SELECT 'test1'
        SELECT $rowguid, $IDENTITY FROM @t

    There are also alternate ways to find an identity column in the database. The following query gives a list of all identity column names with their corresponding schema and table names:

        SELECT SCHEMA_NAME(so.schema_id) SchemaName,
               so.name TableName,
               sc.name ColumnName
        FROM sys.objects so
        INNER JOIN sys.columns sc
            ON so.OBJECT_ID = sc.OBJECT_ID
            AND sc.is_identity = 1

    Let me know if you use any alternate method related to identity – I would like to know what you do and how you do it when you have to deal with identity columns. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Advantages of Hudson and Sonar over manual processes or homegrown scripts

    - by Tom G
    My coworker and I recently got into a debate over a proposed plan at our workplace. We've more or less finished transitioning our Java codebase into one managed and built with Maven. Now, I'd like us to integrate with Hudson and Sonar or something similar. My reasons are that they'll provide a 'zero-click' build step to supply testers with new experimental builds, that they will let us deploy applications to a server more easily, and that tools such as Sonar will provide us with much-needed metrics on code coverage, Javadoc, package dependencies and the like. He thinks that the overhead of getting up to speed with two new frameworks is unacceptable, and that we should simply double down on documentation and create our own scripts for deployment. Since we plan on some aggressive rewrites to pay down the technical debt previous developers incurred (gratuitous use of Java's Serializable interface as a file storage mechanism that has predictably bitten us in the ass), he argues that we can document as we go, and that we'll end up changing a large swath of code in the process anyway. I contend that having the accurate metrics that Sonar (or fill in your favorite similar tool) provides gives us a good place to start for any refactoring efforts, not to mention general maintenance -- after all, knowing which classes are the most poorly documented, even if it's just a starting point, is better than seat-of-the-pants guessing. Am I wrong, and trying to introduce more overhead than we really need? Some more background: an alumnus of our company is working at a Navy research lab now and suggested these two tools in particular as ones they've had great success with. My coworker and I have also had our share of friendly disagreements before -- he's more of a "CLI for everything, compiles Gentoo in his spare time and uses Git" type, and I'm more of a "give me an intuitive GUI, plays with XNA and is fine with SVN" type, so there's definitely some element of culture clash here.

    Read the article

  • Mounting Windows shares with Active Directory permissions

    - by Jamie
    I've managed to get my Ubuntu (Server 10.04 beta 2) box to accept logins from users with Active Directory credentials; now I'd like those users to access their permitted Windows shares on a W2003 R2 server. The Windows share (\\srv\Users\) has subdirectories named after the domain user accounts, and permissions are set accordingly. I would like to preserve these permissions, but don't know how to go about it. Would I mount as an AD administrator, or have each user mount with their own AD credentials? And how do I decide between using mount.smbfs or mount.cifs?

    Read the article

  • Manager Self Service at your Fingertips

    - by Elaine Clement
    Last week we released new and improved Manager Self Service capabilities in PeopleSoft HCM 9.1. We delivered a new Manager Dashboard, streamlined many Manager Self Service transactions, provided new Pivot Grid capabilities, and implemented one-click Related Actions accessible from multiple places – all with the goal of improving every Manager’s self service experience.

    Manager Dashboard

    These new capabilities have the potential to significantly impact an organization’s bottom line, and here is why.

    Increased Efficiency

    The Manager Dashboard provides a ‘one-stop shop’ for your Managers, with all of the key data they need consolidated into a single view. Alerts notifying managers of important tasks are immediately viewable and actionable. Administrators can configure the dashboard to include the most important pagelets needed for their organization, and Managers can personalize it to fit their personal way of conducting their tasks. The Related Actions feature further improves the ease with which Managers get their work done by providing one-click access to Manager Self Service transactions.

    Increased Job Satisfaction

    The streamlined Manager transactions, Related Actions, and the new Manager Dashboard provide an enhanced user experience. Managers are able to quickly get in, get the information they need, complete their transactions, and get out. Managers can spend their time focusing on getting the business results they need instead of on their day-to-day HR tasks.

    Enhanced Decision Support

    Administrators can ensure the information and analytics they want their Managers to use are available from the Manager Dashboard, establishing best business practices. Additional pivot grids relevant to your own organization can be added to the Manager Dashboard. With this easy access to the relevant information in an easily understood format, Managers can make the right business decisions needed to improve their team and their team’s productivity.

    For more details on the Manager Dashboard and some of the other newly posted features, such as a new Talent Summary, check out this video and others: Oracle PeopleSoft Webcasts

    Read the article

  • Is there a design pattern for chained observers?

    - by sharakan
    Several times, I've found myself in a situation where I want to add functionality to an existing Observer-Observable relationship. For example, let's say I have an Observable class called PriceFeed, instances of which are created by a variety of PriceSources. Observers on this are notified whenever the underlying PriceSource updates the PriceFeed with a new price. Now I want to add a feature that allows a (temporary) override to be set on the PriceFeed. The PriceSource should still update prices on the PriceFeed, but for as long as the override is set, whenever a consumer asks the PriceFeed for its current value, it should get the override. The way I did this was to introduce a new OverrideablePriceFeed that is itself both an Observer and an Observable, and that decorates the actual PriceFeed. Its implementation of .getPrice() is straight from Chain of Responsibility, but how about the handling of Observable events? When an override is set or cleared, it should issue its own event to observers, as well as forwarding events from the underlying PriceFeed. I think of this as some kind of a chained observer, and was curious whether there's a more definitive description of a similar pattern.
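
    For illustration, here is a minimal sketch in Python of the shape being described (the class names follow the question; none of this is from a particular library). The decorator subscribes to the feed it wraps, forwards its events, and also emits events of its own when the override changes:

        class Observable:
            def __init__(self):
                self._observers = []

            def subscribe(self, fn):
                self._observers.append(fn)

            def notify(self, price):
                for fn in self._observers:
                    fn(price)

        class PriceFeed(Observable):
            def __init__(self):
                super().__init__()
                self._price = None

            def update(self, price):            # called by the PriceSource
                self._price = price
                self.notify(price)

            def get_price(self):
                return self._price

        class OverrideablePriceFeed(Observable):
            def __init__(self, feed):
                super().__init__()
                self._feed = feed
                self._override = None
                feed.subscribe(self._on_price)  # observer of the wrapped feed

            def _on_price(self, price):
                if self._override is None:      # forward the underlying event
                    self.notify(price)

            def set_override(self, price):      # issue our own event
                self._override = price
                self.notify(price)

            def clear_override(self):
                self._override = None
                self.notify(self._feed.get_price())

            def get_price(self):                # the Chain of Responsibility part
                return self._override if self._override is not None else self._feed.get_price()

    Whether forwarded underlying events should be suppressed while an override is active is a design choice; this sketch suppresses them so consumers never observe a value that get_price() would not return.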

    Read the article

  • Portable scripting language for a multi-server admin?

    - by Aaron
    Please Note: Portable as in portableapps.com, not the traditional definition. Originally posted on stackoverflow.com; asking here at another user's suggestion. I'm a DBA and sysadmin, mostly for Windows machines running SQL Server. I'm looking for a programming/scripting language for Windows that doesn't require admin access or an installer, needing no install process other than expanding it into a folder. My intent is to have a language for automation on which I can standardize. Up to this point, I've been using a combination of batch files and Unix shell, using sh.exe from UnxUtils, but it's far from a perfect solution. I've evaluated a handful of options, and all of them have at least one serious shortcoming or another. I have a strong preference for something open source or dual licensed, but I'm more interested in finding the right tool than anything else. I'm not interested in anything that relies on Cygwin or Java, but at this point I'd be fine with something that needs .NET.

    Requirements:
    - Manageable footprint (1-100 files, under 30 MB installed)
    - Runs on Windows XP and Server (2003+)
    - No installer (exe, msi)
    - Works with external pipes, processes, and files
    - Support for MS SQL Server or ODBC connections

    Bonus points:
    - Open source
    - FFI for calling functions in native DLLs
    - GUI support (native or gtk, wx, fltk, etc)
    - Linux, AIX, and/or OS X support
    - Dynamic, object oriented and/or functional, interpreted or bytecode compiled; interactive development
    - Able to package or compile scripts into executables

    So far I've tried:
    - Ruby: 148 MB on disk, 23,000 files
    - Portable Python: 54 MB on disk, 2,800 files
    - Strawberry Perl: 123 MB on disk, 3,600 files
    - REBOL: great, except closed source and no MSSQL or ODBC in the free version
    - Squeak Smalltalk: great, except poor support for scripting

    ---- cut: points of clarification ----

    Why all the limitations? I realize some of my criteria seem arbitrarily confining, but they're primarily a product of my environment. I work as a SQL Server DBA and backup Unix admin at a division of a large company. In addition to nearly a hundred boxes running some version or another of SQL Server on Windows, I also support the SQL Server Express Edition installs on over a thousand machines in the field. Because of our security policies, I don't have login rights on every machine. Often enough, an issue comes up and I'm given local Admin for some period of time. Often enough, it's some box I've never touched, where I don't have my own environment set up yet. I may have temporary admin rights on the box, but I'm not the admin for the machine - I'm just the DBA. I've no interest in stepping on the toes of the Windows admins, nor do I want to take over any of their duties. If I bring up "installing" something, it suddenly becomes a matter of interest for Production Control and the Windows admins; if I'm copying up a script, no one minds. The distinction may not mean much to the readers, but if someone gets the wrong idea I've suddenly got a long wait and significant overhead before I can get the tool installed and the problem solved. That's why I want something that can be copied and run in the manner of a portable app. What about the small footprint? My company has three divisions, each in a different geographical location, and one of them is a new acquisition. We have different production control/security policies in each division. I support our MSSQL databases in all three divisions. The field machines are spread around the US, sometimes connecting to the VPN over very slow links. Installing Ruby using psexec has taken a long time over these connections. In these instances, the bigger time waster seems to be archives with thousands and thousands of files rather than their sheer size. You could say I'm spoiled by Unix, where the admins usually have at least some modern scripting language installed; I'd use PowerShell, but I don't know it well and, more importantly, it isn't everywhere I need to work. It's a regular occurrence that I need to write, deploy and execute some script on short notice on some machine on which I've never logged in. Since having Ruby or something similar installed on every machine I'll ever need to touch is effectively impossible given the approvals, time, and Windows admin labor needed, it makes more sense to find a solution that allows me to work on my own terms.

    Read the article

  • WordPress is now nicely supported on SQL Server (and SQL Azure for that matter)

    - by Eric Nelson
    WordPress is enormously popular for blogs and full websites, thanks to an awesome ecosystem which has built up around it, the (relative) simplicity of getting it up and running, plus the flexibility to “bend it” in all sorts of directions. When I say bend, check out the following, which are all WordPress sites:
    - My “back up blog” http://iupdateable.wordpress.com/
    - My group’s “odd site” :) http://ubelly.com
    - My favourite “cheap games” site http://www.frugalgaming.co.uk/
    WordPress users typically run their sites on Linux and MySQL, although PHP (the language in which WordPress is written) can happily be run on Windows. Both are fine technologies in their own right, but I (and probably a fair few others) would love to use WordPress with the technologies I know best (aka Windows, IIS and SQL Server). However, that has proven rather tricky to get working in practice – until now. Last month OmniTI released a patch for WordPress which provides SQL Server and SQL Azure support. In parallel, some fine folks inside Microsoft have created http://wordpress.visitmix.com, which contains information about running WordPress on the Microsoft platform with a particular focus on SQL Server and SQL Azure. Top stuff!
    To run WordPress with SQL Server:
    - Download and install the WordPress on SQL Server distro/patch
    And then you will quite likely need to migrate:
    - Check out how to Migrate to Windows and SQL Server by Zach Owens, who is moving his blog to Windows and SQL Server
    Enjoy!
    Related Links:
    - Running PHP on IIS on Windows http://php.iis.net/
    - If PHP is not your thing, the following blog engines are .NET based: BlogEngine http://www.dotnetblogengine.net/ DasBlog http://www.dasblog.info/ Subtext http://subtextproject.com/ (which happens to power http://geekswithblogs.net where my main blog is http://geekswithblogs.net/iupdateable)

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put, EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting, and has seen tremendous growth in the last few years, is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:
    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:
    - Service Catalogs: Defining Standardized Database Service
    - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]
    EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits: presenting a collection of standardized database service definitions, defining standardized pools of hardware and software for provisioning, role based access to cater to different classes of users, automated procedures to provision the predefined database definitions, setting up chargeback plans based on service tiers and database configuration sizes, etc.
    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:
    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read only (requires the Active Data Guard option) mode
    - All database versions 10g to 12c are supported (as certified with EM 12c)
    - All 3 protection modes can be used - maximum availability, maximum performance, and maximum protection
    - Log apply can be set to sync or async, along with the required apply lag
    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (EM 12cR4):

        Primary | Standby [1 or more]
        SI      | -
        SI      | SI
        RAC     | -
        RAC     | SI
        RAC     | RAC
        RON     | -
        RON     | RON

    where RON = RAC One Node, which is supported via custom post-scripts in the service template. A sample service catalog is described below: here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer will come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the Snap Clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.
    In Release 4, we expand the scope of options supported by Snap Clone with the addition of database CloneDB. While CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patchset), it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the Snap Clone feature. For more information on CloneDB, I highly recommend reading the following sources:
    - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    - Oracle OpenWorld presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB
    The advantages of the new CloneDB integration with EM12c Snap Clone are:
    - Space and time savings
    - Ease of setup - no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduces the dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    - Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    - Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.

    3. Improved Rapid Start Kits

    DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). One command creates all the cloud artifacts like roles, administrators, credentials, database profiles, the PaaS Infrastructure Zone, database pools and service templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups.
    The Rapid Start Kit is in reality a simple emcli script which takes a bunch of XML files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which likewise takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.
    Steps to use the kit:
    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py
    The database_cloud_setup.py script takes two inputs:
    - Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools, along with host names, Oracle home locations, or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    - Input xml: This file captures inputs for users, roles, profiles, service templates, etc. - essentially, all inputs required to define the DB services and other settings of the self service portal.
    Once all the xml files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter in the Cloud Administration Guide.

    4. Extensible Metering and Chargeback

    Last but not least, Metering and Chargeback in Release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:
    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like number of backup requests, job executions, support requests, etc.
    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter in the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention. Most of these have been asked for by customers like you. They are:
    - Custom naming of DB services: self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces; every custom name is validated for uniqueness in EM
    - 'Create like' of service templates: creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
    - Profile viewer: view the details of a profile, like datafiles, control files, snapshot IDs, export/import files, etc., prior to its selection in the service template
    - Cleanup automation, for failed and successful requests: a single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed on a per-request basis or for the entire pool; as an extension, you can also delete successful requests
    - Improved delete user workflow: allows administrators to reassign cloud resources to another user or delete all of them
    - Support for multiple tablespaces for Schema as a Service: in addition to multiple schemas, users can also specify multiple tablespaces per request
    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!
    References:
    - Cloud Management Page on OTN
    - Cloud Administration Guide [Documentation]
    -- Adeesh Fulay (@adeeshf)

    Read the article

  • Are there open source alternatives to Bitbucket, GitHub, Kiln, and similar DVCS browsing and management tools?

    - by Ryan Taylor
    I am aware of several tools/services that provide DVCS browsing and management, such as Bitbucket, GitHub, Kiln, SCM-Manager and RhodeCode. However, the use case I am considering is one such that:
    - Any source code must reside on an employer's internal servers.
    - The solution must be open source.
    - It should provide a Bitbucket or GitHub like experience, including a project wiki, repository browsing and management, and social coding aspects such as code review.
    - The solution should have Mercurial support (if not support for other DVCSs).
    Of these, only SCM-Manager and RhodeCode come close, as they can be installed on your own servers and are open source. However, they do not offer the Bitbucket or GitHub experience: there is no issue tracker or wiki, and the UI, while functional, is not up to par with GitHub or Bitbucket. I can get close with Trac or Redmine with their repository browsers, but unfortunately they do not have any repository management capabilities. Are there other open source tools out there that would provide a similar experience to Bitbucket, GitHub or Kiln?

    Read the article

  • Ubuntu Server 12.04 as a router. Problem with DNS

    - by Lorenzo
    I have a VirtualBox lab made up of 4 Windows 2008 R2 servers (DC/DNS, SQL, SHAREPOINT, EXCHANGE) that are configured with static IP addresses, with NICs attached to the internal network. Everything works. I then had a requirement to run some tests that also access external services on the internet. To keep things clean and similar to the production environment, I installed another VM with Ubuntu Server 12.04 64-bit and configured it (I hope) to work as a router, as described in this post. This VM has two network interfaces: the first is bridged with the host and is used as the WAN connection, and the other is attached to the internal network with its own static IP address on the internal subnet. But the Windows servers cannot reach the internet, while the Unix box can. I ran the route command; this is the result:

        Kernel IP routing table
        Destination   Gateway       Genmask        Flags  Metric  Ref  Use  Iface
        default       10.69.121.1   0.0.0.0        UG     100     0    0    eth0
        10.69.121.0   *             255.255.255.0  U      0       0    0    eth0
        192.168.83.0  *             255.255.255.0  U      0       0    0    eth1

    Can somebody help me with this configuration? :) Thanks! Addendum: I forgot to mention that one of the Windows servers hosts a DNS service, for which I should perhaps configure a forwarder, but I do not know exactly which server to forward to... :(

    Read the article

  • Are very short or abbreviated method/function names that don't use full words bad practice or a matter of style?

    - by Alb
    Is there nowadays any case for brevity over clarity in method names? Tonight I came across the Python function repr(), which seems like a bad name to me. It's not an English word. It apparently is an abbreviation of 'representation', and even if you can deduce that, it still doesn't tell you what the function does. A good method name is subjective to a certain degree, but I had assumed that modern best practices agreed that names should be at least full words and descriptive enough to reveal enough about the method that you could easily find it when looking for it. Method names made from words help your code read like English. repr() seems to have no advantage as a name other than being short, and IDE auto-complete makes that a non-issue. An additional reason given in an answer is that Python names are brief so that you can do many things on one line. Surely the better way is to just extract the many things into their own functions, and repeat until lines are not too long. Are these just a hangover from the Unix way of doing things? Commands with names like ls, rm, ps and du (if you can call those names) were hard to find and hard to remember. I know that the everyday usage of commands such as these is different from methods in code, so whether those are bad names is a separate question.
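
    For readers who haven't met it, a quick look at what repr() actually does: it asks an object for a developer-facing 'representation' string, and classes opt in by defining __repr__:

        class Point:
            def __init__(self, x, y):
                self.x, self.y = x, y

            def __repr__(self):
                # a fuller name like to_debug_string() would arguably be clearer
                return f"Point({self.x!r}, {self.y!r})"

        print(repr("hi"))         # "'hi'" - quotes included, unlike str()
        print(repr(Point(1, 2)))  # Point(1, 2)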

    Read the article

  • What is causing my newly built PC with Windows 7 to BSOD?

    - by Ben S
    I recently built my own PC from parts and installed Windows 7, and I have been getting BSODs with various different stop codes. The latest was 0x24, but I've also had 0xd1 and 0x1e. However, Windows does not tell me where the fault occurred, so I have no idea how to go about resolving this. I know the cause is likely a hardware driver, but I don't know which one, and I cannot find information about how to use the minidumps to troubleshoot the problem. I've uploaded the last three minidumps in case someone can make sense of them and let me know what could be causing my BSODs. Thanks, Ben

    Read the article

  • Triple-Head Setup: Radeon 5770

    - by Aren B
    I currently own a Sapphire Radeon HD 5770 1 GB w/ GDDR5 (Link). I've got two LCDs set up in this configuration:

        +---------++------+
        |         ||      |
        |    1    ||  2   |
        +---------++------+

    I'm looking into buying a new TV/monitor, a Samsung T240HD (http://www.memoryexpress.com/Products/PID-MX22473(ME).aspx), and I'd like to set up a triple monitor arrangement like this (the new monitor being #3):

        +---------++------++---------+
        |         ||      ||         |
        |    3    ||  2   ||    1    |
        +---------++------++---------+

    Monitor 1: DVI
    Monitor 2: DVI
    Monitor 3: HDMI
    PS3 - Monitor 3: HDMI2

    Is this possible with my current video card? Can I plug in 2x DVI + 1x HDMI and get a third display? Or am I going to have to buy a slew of DisplayPort adapters? I know on older video cards you could only have 2 active displays, but I heard that barrier was removed with the DisplayPort-equipped video cards.

    Read the article

  • RGB? CMYK? Alpha? What Are Image Channels and What Do They Mean?

    - by Eric Z Goodnight
    They’re there, lurking in your image files. But have you ever wondered what image channels are? And what do they have to do with RGB and CMYK? Here’s the answer. The Channels panel in Photoshop is one of the most underused and misunderstood parts of the program, but images have color channels with or without Photoshop. Read on to find out what color channels are, what RGB and CMYK are, and to learn a little bit more about how image files work.
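
    To see channels directly in code, here is a small sketch using Python and the Pillow imaging library (the file names are placeholders): an RGB image really is three stacked grayscale images, and CMYK is four.

        from PIL import Image

        im = Image.open("photo.jpg").convert("RGB")
        r, g, b = im.split()               # three single-channel grayscale images
        print(im.mode, r.mode)             # RGB L

        c, m, y, k = im.convert("CMYK").split()  # CMYK carries a fourth channel

        # zero out the red channel and recombine, showing channels ARE the image
        no_red = Image.merge("RGB", (r.point(lambda _: 0), g, b))
        no_red.save("no_red.jpg")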

    Read the article

  • Move model forward based on model orientation

    - by ChocoMan
    My model rotates on its own Y-axis regardless of where it is in the world. Here are the controls for the left thumbstick:
    - UP (move model forward on the Z-axis)
    - DOWN (move model backward on the Z-axis)
    - LEFT & RIGHT (strafe to either side)
    The problem is that UP and DOWN always move the model along the world Z-axis rather than along its current orientation, which breaks down if the player also rotates while moving forward or backward. An example of what I'm trying to achieve would be a car doing donuts: the car always treats the direction it is currently facing as forward (and behind it as backward) in relation to its local rotation. Here is how I'm calling the movement:

        // Rotate model with Right Thumbstick along X-Axis
        modelRotation -= pController.ThumbSticks.Right.X * mRotSpeed;

        // Move Forward
        if (pController.IsButtonDown(Buttons.LeftThumbstickUp))
        {
            modelPosition.Z -= -pController.ThumbSticks.Left.Y * speed;
        }

        // Move Backward
        if (pController.IsButtonDown(Buttons.LeftThumbstickDown))
        {
            modelPosition.Z += pController.ThumbSticks.Left.Y * speed;
        }

        // Strafe Left
        if (pController.IsButtonDown(Buttons.LeftThumbstickLeft))
        {
            modelPosition.X += -pController.ThumbSticks.Left.X * speed;
        }

        // Strafe Right
        if (pController.IsButtonDown(Buttons.LeftThumbstickRight))
        {
            modelPosition.X -= pController.ThumbSticks.Left.X * speed;
        }

        // DeadZone
        if (!pController.IsButtonDown(Buttons.LeftThumbstickUp) &&
            !pController.IsButtonDown(Buttons.LeftThumbstickDown) &&
            !pController.IsButtonDown(Buttons.LeftThumbstickLeft) &&
            !pController.IsButtonDown(Buttons.LeftThumbstickRight))
        {
        }
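
    For reference, the usual approach is to derive the movement vector from the model's current yaw instead of adding to the raw world axes. It is sketched below in Python since the trigonometry is engine-agnostic (in XNA the same vector can be built with Math.Sin/Math.Cos or Matrix.CreateRotationY; exact signs depend on your coordinate conventions):

        import math

        def move(pos_x, pos_z, yaw, stick_x, stick_y, speed):
            # forward direction in the XZ plane for the current yaw
            fwd_x, fwd_z = math.sin(yaw), math.cos(yaw)
            # right direction is the forward vector rotated 90 degrees
            right_x, right_z = fwd_z, -fwd_x
            # push forward/back along the facing, strafe along the right vector
            pos_x += (fwd_x * stick_y + right_x * stick_x) * speed
            pos_z += (fwd_z * stick_y + right_z * stick_x) * speed
            return pos_x, pos_z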

    Read the article

  • A new CAPTCHA using sentences?

    - by Xeoncross
    I was just thinking about how reCAPTCHA is getting harder when I thought about another possible solution. Images won't last forever, so we will need something else some day - something like human logic or emotion. Google and others are trying grouping images by category (find the image that doesn't belong), but that requires a large number of images and doesn't work for the blind. Anyway, what if a massive collection of text was gathered (public-domain books from each language) and a sentence was shown to the user with 1 (or 2) words replaced by a select box of choices? Only someone who knew correct English/Spanish/German grammar would be able to tell which of the words belongs in the sentence. I would assume that anyone who knew the language the sentence was displayed in could figure out the answer more easily than trying to read the reCAPTCHA text. Plus, storing an insane number of sentences would only take a couple of gigabytes of space and wouldn't take anywhere near the CPU time that creating images/audio takes. In other words, anyone could host their own CAPTCHA system with minimal impact on system performance. Is there a problem with this approach? More specifically, I'm looking for the main problem with this approach.
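
    To make the proposal concrete, here is a toy sketch in Python of generating such a challenge. The corpus and decoy words are stand-ins; a real deployment would draw both from large public-domain texts:

        import random

        CORPUS = [
            "It was the best of times, it was the worst of times.",
            "Call me Ishmael.",
            "The quick brown fox jumps over the lazy dog.",
        ]
        DECOYS = ["purple", "seventeen", "quietly", "beneath"]

        def make_challenge():
            words = random.choice(CORPUS).split()
            i = random.randrange(len(words))
            answer = words[i].strip(".,").lower()
            words[i] = "_____"                     # blank the chosen word
            choices = random.sample(DECOYS, 3) + [answer]
            random.shuffle(choices)
            return " ".join(words), choices, answer

        sentence, choices, answer = make_challenge()
        print(sentence)
        print("Options:", ", ".join(choices))      # compare the pick to answer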

    Read the article

  • Mailbox move issue from Exchange 2003 to Exchange 2010

    - by Ryan Roussel
    Today while moving mailboxes between Exchange 2003 and Exchange 2010, I hit an issue with a couple of mailboxes. These mailboxes all popped access denied errors, or more exactly: "Insufficient Access Rights to perform the operation." The cause was similar to the mail flow issue, in that inheritable permissions were not turned on for the user object in Active Directory. This also presented its own unique problem: since the initial move request failed because of permissions, it had to be cleared before a new move request could be created. On top of that, the request did not show up in the EMC. I used the following process to clear the request, assign permissions, then create a new request:

    1. First you need to know the ExchangeGUID of the mailbox for the Remove-MoveRequest command. To quickly get the GUID for a mailbox, simply run Get-Mailbox, e.g.:

        [PS] C:\>Get-Mailbox jyoung | Format-List ExchangeGUID

    2. Next we need to clear out the move request using PowerShell by running:

        [PS] C:\>Remove-MoveRequest -MoveRequestQueue "mailbox database 1030639620" -MailboxGuid 8525686f-d4d3-42b7-92f1-46d77ea841a3

    3. Then we re-establish inheritable permissions. This can be done in AD Users and Computers: switch to View Advanced Features, then under the Security tab of the object, click Advanced and check "Allow inheritable permissions of parent to propagate to this object".

    4. Once the inheritable permissions are restored, we need to create a new move request (the EMC can also be used to initiate the move request once the permissions are corrected):

        [PS] C:\>New-MoveRequest -Identity jyoung -BadItemLimit 100 -TargetDatabase "mailbox database 1030639620"

    And that's it. The mailbox should move over smoothly with no access denied error.

    Read the article

  • Restoring an Ubuntu Server using ZFS RAID-Z for data

    - by andybjackson
    Having become disillusioned with hacking Buffalo NAS devices, I've decided to roll my own home server. After some research, I have settled on an HP ProLiant MicroServer with Ubuntu Server and ZFS (OS on 1 ext4 disk, data on 3 RAID-Z disks). As Joel Spolsky and Jeff Atwood say with regard to backups, I can't rest until I have done a restore in all of the failure scenarios that I am seeking to protect against. Q: How do I configure Ubuntu Server to recognise a pre-existing RAID-Z array? Clearly if one of the data disks dies, that is a resilvering scenario, which is well documented. If two of the data disks die, then I am into regular backup/restore land. If the OS dies and I can restore it, that is also an easy scenario. But if the OS dies and I can't restore it, then I need to recreate an Ubuntu server. How do I get this new install to recognise my RAID-Z array? Is the necessary configuration information stored within and across the RAID-Z array, simply needing to be found (if so, how)? Or does it reside on the OS ext4 disk (in which case, how do I recreate it)?

    Read the article

  • salted passwords confusion

    - by Vasiliy Stavenko
    I'm setting up an email server for the first time and am confused by a strange thing. I have several user accounts which were stored on the previous server. Passwords for these accounts are in plain text, but I want to create crypts for them. MySQL (where my users will be stored) has the function ENCRYPT(passwd, salt); if no salt is given, a random value is used. I discovered that Courier uses one particular salt and crypted all the passwords with it. So the task is done. But I'd like to know: is there a way to define my own salt for my POP3 server?
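
    For the curious, the mechanics of a salted crypt hash are easy to poke at from Python's crypt module (Unix only; the salt strings below are arbitrary examples, and MySQL's ENCRYPT() is typically a thin wrapper over the same crypt(3) call):

        import crypt

        password = "s3cret"
        # traditional DES crypt: the salt is the first two characters
        print(crypt.crypt(password, "ab"))
        # MD5-based crypt: salt of the form $1$...$
        stored = crypt.crypt(password, "$1$mysalt$")
        print(stored)
        # to verify later, re-crypt the candidate using the stored hash as the salt
        assert crypt.crypt(password, stored) == stored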

    Read the article

  • As an indie game dev, what processes are best for soliciting feedback on my design/spec/idea? [closed]

    - by Jess Telford
    Background: I have worked in a professional environment where the process usually goes like the following:
    1. Brainstorm the idea
    2. Solidify the game mechanics / design
    3. Iterate on the design/idea to create a more solid experience
    4. Spec out the details of the design/idea
    5. Build it
    Step 3 is generally done with the stakeholders of the game (developers, designers, investors, publishers, etc.) to reach an 'agreement' which meets the goals of all involved. Because this process involves a series of often opposing and unique viewpoints, creative solutions can surface through discussion / iteration. This is backed up by a process for collating the changes / new ideas, as well as structured time for discussion. As a (now) indie developer, I have to play the role of all the stakeholders (developers, designers, investors, publishers, etc.), and often find myself too close to the idea / design to do more than minor changes, which I feel to be local maxima when it comes to the best result (I'm looking for the global maximum, of course). I have read that ideas / game designs / unique mechanics are merely multipliers of execution, and that keeping them secret is just silly. In sharing the idea with others outside the realm of my own thinking, I hope to replicate the influence other stakeholders have. I am struggling with the collation of changes / new ideas, and with finding any kind of structured method of receiving feedback. My question: As an indie game developer, how and where can I share my ideas/designs to receive meaningful / constructive feedback? How can I successfully collate the feedback into a new iteration of the design? Are there any specialized websites, etc.?

    Read the article

  • How do I download photos tagged of me from Facebook?

    - by Keith
    I want to be able to download (and back up) photos tagged of me on Facebook. I'm specifically not interested in my own photo albums - I uploaded those and therefore already have them in better quality than Facebook keeps. What I want are the photos others have uploaded that have me in them. I have a couple of hundred of these now, and don't much fancy three hours of right-click save-as... There seem to be a couple of utilities that pop up with a quick search, but (call me paranoid) I'm wary of giving some random freeware app my login and password. SocialSafe looked promising, but as it doesn't support this feature at the moment, it's kinda pointless. Can anyone recommend one that they've actually used? I'd consider an open source one - I'm a programmer and don't mind digging through the code to check that it doesn't do anything nasty.
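
    Since you're a programmer, one do-it-yourself route is the Facebook Graph API with your own OAuth access token, so no password ever goes to third-party software. A rough sketch with Python and requests is below; the endpoint and field names reflect the Graph API as I understand it and have changed over the years, so treat this as an outline rather than a recipe:

        import os
        import requests

        TOKEN = os.environ["FB_ACCESS_TOKEN"]          # obtained via Facebook's OAuth flow
        url = "https://graph.facebook.com/me/photos"   # photos tagged of me
        params = {"access_token": TOKEN}

        while url:
            data = requests.get(url, params=params).json()
            params = {}                                # the paging URL carries the token
            for photo in data.get("data", []):
                src = photo.get("source")              # URL of the full-size image
                if src:
                    with open(photo["id"] + ".jpg", "wb") as f:
                        f.write(requests.get(src).content)
            url = data.get("paging", {}).get("next")   # follow pagination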

    Read the article

  • Distinction between API and frontend-backend

    - by Jason
    I'm trying to write a "standard" business web site. By "standard", I mean this site runs the usual HTML5, CSS and JavaScript for the front-end, a back-end (to process stuff), and MySQL for the database. It's a basic CRUD site: the front-end just makes pretty whatever the database has in store; the back-end writes to the database whatever the user enters and does some processing. Just like most sites out there. In creating my GitHub repositories to begin coding, I've realized I don't understand the distinction between the front-end, the back-end, and the API. Another way of phrasing my question is: where does the API come into this picture? I'm going to list some more details and then the questions I have - hopefully this gives you guys a better idea of what my actual question is, because I'm so confused that I don't know the specific question to ask.
    Some more details:
    - I'd like to try the Model-View-Controller pattern. I don't know if this changes the question/answer.
    - The API will be RESTful.
    - I'd like my back-end to use my own API instead of allowing the back-end to cheat and call special queries. I think this style is more consistent.
    My questions:
    - Does the front-end call the back-end, which calls the API? Or does the front-end just call the API instead of calling the back-end?
    - Does the back-end just execute an API call, and does the API return control to the back-end (where the back-end acts as the ultimate controller, delegating tasks)?
    Long and detailed answers explaining the role of the API alongside the front-end and back-end are encouraged. If the answer depends on the model of programming (models other than the Model-View-Controller pattern), please describe these other ways of thinking of the API. Thanks - I'm very confused.
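
    To make the terms concrete, here is one common arrangement sketched in Python with Flask (chosen arbitrarily; the route names are made up): the 'API' is simply the back-end's HTTP surface. The browser front-end calls these routes from JavaScript, and any internal jobs the back-end runs can call the very same routes instead of issuing special queries:

        from flask import Flask, jsonify, request

        app = Flask(__name__)
        ITEMS = {}                      # stand-in for the MySQL table

        @app.route("/api/items", methods=["GET"])
        def list_items():               # the front-end reads via the API
            return jsonify(items=list(ITEMS.values()))

        @app.route("/api/items", methods=["POST"])
        def create_item():              # the front-end writes via the API
            item = request.get_json()
            ITEMS[item["id"]] = item
            return jsonify(item), 201

        if __name__ == "__main__":
            app.run()

    In MVC terms the routes play the controller, the storage layer is the model, and the HTML/JS front-end is the view; under this arrangement 'back-end' and 'API' are two names for the same code seen from different sides.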

    Read the article

  • Execute background program in bash without job control

    - by Wu Yongzheng
    I often execute GUI programs, such as firefox and evince, from the shell. If I type "firefox &", firefox is considered a bash job, so "fg" will bring it to the foreground and "hang" the shell. This becomes annoying when I have background jobs such as vim already running. What I want is to launch firefox and disassociate it from bash. Consider the following ideal case with my imaginary runbg:

        $ vim foo.tex
        (Ctrl+Z, and vim is job 1)
        $ pdflatex foo
        $ runbg evince foo.pdf
        (evince runs in the background and I get my bash prompt back)
        $ fg
        (vim goes to the foreground)

    Is there any way to do this using an existing program? If not, I will write my own runbg.
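
    Should it come to writing runbg by hand, here is a minimal sketch of one way to do it in Python: launch the child in its own session so the shell's job control never sees it, with the standard streams detached. Tools like setsid, nohup, and the disown built-in get a similar effect.

        #!/usr/bin/env python3
        import subprocess
        import sys

        def runbg(argv):
            subprocess.Popen(
                argv,
                stdin=subprocess.DEVNULL,
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
                start_new_session=True,  # setsid(): detached from the shell's job table
            )

        if __name__ == "__main__":
            runbg(sys.argv[1:])          # usage: runbg evince foo.pdf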

    Read the article
