Search Results

Search found 7864 results on 315 pages for 'pre commit hook'.


  • Upgrade fails because of blcr-dkms

    - by Peter Smit
    When I try to update my Ubuntu 10.04 installation to 10.10 I get the following error:

        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade:
        Trying to install blacklisted version 'blcr-dkms_0.8.2-13'
        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug against the 'update-manager' package and include the files in /var/log/dist-upgrade/ in the bug report.

    None of the three applies to me (I think). I also found that this affects someone else: https://bugs.launchpad.net/update-manager/+bug/657662 Is there someone here who knows what could be wrong? Or a workaround so that I can install Maverick Meerkat?

    Read the article

  • Technically speaking, what is different about Ubuntu compared to other Linux distributions?

    - by Ross
    This is a question that's puzzled me for quite a while (and refers to the differences between all distributions). In my mind, a distribution is: a pre-configured OS, with some pre-installed packages, some created by the distribution's community that are unique to that distribution (e.g. apt-get). I'm not sure my definition is right as I feel there's something else. I'm really interested in setting up my own ArchLinux distro (which starts as a very minimal barebones system that you expand yourself) but feel I need to understand this first.

    Read the article

  • A new method of supporting FOSS?

    - by James
    I have been kicking an idea around for some time and wondered if something of its nature hadn't already been invented. The premise is a website that integrates code management, project/team management, and micro-transactions. Donations, in and of themselves, are a sporadic and unreliable method of supporting developers. Furthermore, most free software that accepts donations is started by programmers, be it to learn, because of a hobby, or because they saw a niche that needed to be filled. There is no method in place of saying "hey, the FOSS community needs this kind of software, will someone develop it and accept donations!?" Programmers should be programming, not busy begging for money.
    Basically the idea is that people can go to the site in question and start a project or make a request. Anyone signed up with the site can start a request. Each member account is free to support or "upvote" a project request. Requests and the associated number of votes let programmers in the community know the needs of the community. When a project is started, a request for developers can be put forth. Developers have a ranking based on commits to other projects. The project founder can send invites to known developers, or accept invites from members based on developer ranking. Once the project has at least one team-member, an objectives sheet or "draft" can be put out, listing design, goals, and features. The founding member and each team-member may contribute to this sheet. Each "milestone" or "feature" is represented by an article. An article is any unit of a draft that can be voted on by the project founder, team-members, and contributors... which brings me to the next half of this idea.
    --- Micro-transactions ---
    People signed up with this hypothetical website can purchase credits which can then be transferred to projects they would like to support. Anyone who transfers credits to a project is known as a contributor to that project. At any time a founder, or the lead team-member, may submit an article or a design (multiple articles) for consideration. All team-members, as well as the founder, can vote once for each article freely. Contributors may vote yes or no on a number of articles (independent of any given meeting where a particular design or article is considered) equal to the number of credits they have placed into a contributors' fund for that particular project. A contributors' fund is a proxy between a site's credit account and a project's credit account. It is sort of like a promise to contribute, instead of an actual contribution. Contributors may place constraints on particular articles such that if those constraints (a yes or no vote) are satisfied, then a manually specified amount of credits is automatically transferred to the project account (a rough sketch of this constraint mechanism follows below). This allows a project to develop based on the needs of those who may (in the future) financially rely on the project.
    --- Code commits & milestones ---
    When a team-member makes a commit, they may specify whether it's a minor commit, a bug fix, a compatibility patch (i.e. for a new platform), or a milestone (an article voted on previously). People signed up with the website may download the updated project and test it to see if the programmer's assertion about the commit is true. A report may then be filed on a small form, giving one or two paragraphs and a positive or negative confirmation of the programmer's goal for that particular commit. After all milestones for a particular draft are complete, a new draft is submitted for voting.
    Also, funds may be withdrawn by each team-member based on the proportion of commits and milestones confirmed (fulfilled the stated purpose) for each programmer.
    --- Voting ---
    Members, contributors and non-contributors alike, may make priority requests for particular articles of a draft. The project founder may or may not opt to fill those requests based on the volume of upvotes. A fulfilled priority request means that any team-member who makes a community-confirmed commit for an article is, when all articles for the draft are fulfilled, granted a portion of project credits in proportion to the average priority of all the articles he committed.
    --- Notes ---
    While this is horribly prone to design-by-committee, the one saving grace is that the lead team-member may place constraints on a draft such that some, or ALL, articles must be voted yes. Commits may not begin until a draft satisfying said constraints is approved. What does SO think, is this idea feasible? Does anyone see major problems with this? Are there any insights or improvements that could be made?
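
    To make the contributor-fund constraint concrete, here is a minimal, hypothetical C# sketch of the mechanism described above (a pledge that automatically releases credits once an article's vote matches the outcome the contributor asked for). Every type and member name below is invented for illustration; nothing comes from an existing site or API.

        using System.Collections.Generic;

        // Hypothetical pledge a contributor attaches to one article of a draft.
        public class PledgeConstraint
        {
            public string ArticleId { get; set; }
            public bool RequiredOutcome { get; set; }     // the yes/no result the contributor wants
            public decimal CreditsToTransfer { get; set; }
        }

        // Hypothetical contributors' fund: a promise to contribute, settled when votes close.
        public class ContributorFund
        {
            public decimal Balance { get; private set; }
            private readonly List<PledgeConstraint> _constraints = new List<PledgeConstraint>();

            public ContributorFund(decimal initialCredits) { Balance = initialCredits; }

            public void AddConstraint(PledgeConstraint constraint) { _constraints.Add(constraint); }

            // Called when an article's vote closes: each satisfied constraint moves
            // credits out of the fund; the caller credits the project's account.
            public decimal SettleVote(string articleId, bool outcome)
            {
                decimal transferred = 0m;
                foreach (var c in _constraints)
                {
                    if (c.ArticleId == articleId && c.RequiredOutcome == outcome && Balance >= c.CreditsToTransfer)
                    {
                        Balance -= c.CreditsToTransfer;
                        transferred += c.CreditsToTransfer;
                    }
                }
                return transferred;
            }
        }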

    Read the article

  • How can I create tiles that scale to multiple resolutions?

    - by Darestium
    I am trying to create a multiplayer version of the popular Flash game N in Java. However, I'm not sure how to create a tileset that will scale up. Are the tiles for N pre-drawn, or are they defined with mathematical formulas in code? I do see how they would scale up in Flash if they were pre-rendered. So if anyone has any ideas on how I should go about creating the tileset, or how they are created in the game, please let me know. You can check out the game here.

    Read the article

  • Webcast: Introduction To Causal Factors

    - by ChristineS-Oracle
    Webcast: Introduction To Causal Factors
    Date: June 11, 2014 at 11:00 am ET, 10:00 am CT, 9:00 am MT, 8:00 am PT, 8:30 pm India Time (Mumbai, GMT+05:30)
    This one-hour advisor webcast will provide an introduction to causal factors for Demand Management and AFDM. Pre-seeded causal factors will be discussed, as well as when they are not appropriate. Scenarios for when to add causal factors will be covered, along with best-practice methods for adding and using them. Topics will include:
    Causal factors in DM and AFDM
    Pre-seeded causal factors
    When to modify causal factor settings
    Best practices when working with causal factors
    Details & Registration: Doc ID 1664606.1

    Read the article

  • Developing an online email service [closed]

    - by Richard Stokes
    I am interested in developing an online email service (e.g. Gmail, Hotmail, but on a much smaller scale) allowing people to sign up for free email addresses on my domain. The domain in question is already purchased, but I have no idea how to even start. I was hoping to code this using a Ruby framework such as Rails or Sinatra. Firstly, are there any libraries/pre-made solutions to this problem that exist already that would be easy enough to just plug-in to my own site? Secondly, if there are no real pre-made solutions, what are the general steps I need to take to accomplish this task?

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that shared a path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions. Below are highlights of the most informative:
    Q: DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment and testing it out?
    A: You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust / real-world configuration with Zones and Pools or DR capabilities, for example.
    Q: How does this benefit companies having their own data center?
    A: This allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility and reduced risk. The benefits to the (internal) consumers of services are much faster provisioning and quicker response to changes in business requirements.
    Q: From a deployment perspective, is DBaaS solely the DBA's job?
    A: The best deployment model enables the DBA (or end-user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.
    Q: The purpose of self-service seems to be that the end user does not rely on the DBA. I just need to give him a template; he decides how much AMM he needs. Why should I set it one by one? That doesn't seem to be the purpose of self-service.
    A: Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.
    Q: Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS?
    A: There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.
    Q: How does Database as a Service impact or help with "Click to Compute" or deploying a "Database in cloud infrastructure"?
    A: DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.
    Q: How exactly is DBaaS different from a traditional database? Storage/OS/DB all together to 'transparently' provide service to applications? Will there be cross-database access by application/user?
    A: Some key differences are: 1) The services run on a shared platform. 2) The services can be rapidly provisioned (< 15 minutes). 3) The services are dynamic and can be relocated, grown, and shrunk as needed to meet business needs, without disruption and rapidly. 4) The user is able to provision the services directly from a standardized service catalog.
    Q: With 24x7x365 databases it is difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How does pluggable database handle this and the different needs/patching downtime of the application databases it might be serving?
    A: You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patch/upgrade very efficient in that you can pre-provision a new version/patched multitenant DB in a different ORACLE_HOME and then unplug a PDB from its CDB and plug it into the newer/patched CDB in seconds.
    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.

    Read the article

  • Sporadic disk clicking sound

    - by Abdó
    Hi, I'm having some unusual and sporadic hard disk clicking issues. Here is a chronological description of the facts. I'm using an ASUS P6T-SE with an Intel Core i7, 6 GB RAM, a 600 W power supply and ATI 4670 graphics, running Ubuntu 10.10.
    About one month ago my hard disk (SATA II Seagate Barracuda 1 TB 7200 rpm) started making a clicking sound: a sort of loud tic-tac, every second or so, when involved in disk activity. The system was clearly slower than before at disk access, but it was functional and I could not find any sign of trouble in the Linux logs. I disconnected the disk and tried an older SATA drive I had around: no problem with it. Then I reconnected the Seagate disk, and the problem was mysteriously gone. Ubuntu booted normally, usual speed, no clicking.
    A couple of weeks later, the problem reappeared. I tried disconnecting and reconnecting (as it somehow solved the problem before) without luck. So, despite it being a rather new drive, I assumed it was a hardware issue, made backups and bought a new drive. The new drive is a SATA II Seagate Barracuda 1.5 TB 7200 rpm. I installed both drives at the same time, with the intention of transferring my files from one to the other. To my surprise, when I booted the computer with both drives, both started making the clicking sound!! Even worse, I removed the old drive, leaving the unformatted new drive connected, and booted from a LiveCD. It kept clicking!
    Puzzled by this, I tried both drives on my laptop with a SATA to USB cable. The moment I connected either of them, they made one or two unusual clicks, then immediately stopped doing that and worked normally. The old drive I thought almost dead was working like a charm, as if nothing had happened. Then I thought: "ok, it must be the motherboard. Let's try again". So, I reconnected the old drive to the ASUS P6T motherboard (the same cables and SATA port as before), and it worked as if nothing had happened! The problem was gone again. The new 1.5 TB drive was also working ok: no clicking nor slowdown. So I left the old 1 TB disk connected and kept using the computer daily for 3 weeks, until today it happened again.
    Now I don't really know what to do or check. I'm not even sure if it is a hardware issue any more! This is rather annoying, as it seems to happen with a period of 2 or 3 weeks and I have no means of forcing it to happen. Does anyone have a clue what could cause this behaviour, or any suggestions of things I should check when it happens again?
    What I did today was check some SMART parameters:
    Error log (smartctl -l error /dev/sda): no errors
    Short self-test (smartctl -t short /dev/sda): no errors
    Disk health check (smartctl -H /dev/sda): passed
    And here are the vendor-specific parameters (smartctl -A /dev/sda), which I'm not quite sure how to interpret.

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 10
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x000f 120   099   006    Pre-fail Always  -           235962588
          3 Spin_Up_Time            0x0003 095   095   000    Pre-fail Always  -           0
          4 Start_Stop_Count        0x0032 100   100   020    Old_age  Always  -           187
          5 Reallocated_Sector_Ct   0x0033 100   100   036    Pre-fail Always  -           0
          7 Seek_Error_Rate         0x000f 072   060   030    Pre-fail Always  -           16348045
          9 Power_On_Hours          0x0032 096   096   000    Old_age  Always  -           3590
         10 Spin_Retry_Count        0x0013 100   100   097    Pre-fail Always  -           0
         12 Power_Cycle_Count       0x0032 100   100   020    Old_age  Always  -           94
        183 Runtime_Bad_Block       0x0032 100   100   000    Old_age  Always  -           0
        184 End-to-End_Error        0x0032 100   100   099    Old_age  Always  -           0
        187 Reported_Uncorrect      0x0032 100   100   000    Old_age  Always  -           0
        188 Command_Timeout         0x0032 100   097   000    Old_age  Always  -           4295164029
        189 High_Fly_Writes         0x003a 100   100   000    Old_age  Always  -           0
        190 Airflow_Temperature_Cel 0x0022 070   057   045    Old_age  Always  -           30 (Lifetime Min/Max 19/31)
        194 Temperature_Celsius     0x0022 030   043   000    Old_age  Always  -           30 (0 18 0 0)
        195 Hardware_ECC_Recovered  0x001a 037   026   000    Old_age  Always  -           235962588
        197 Current_Pending_Sector  0x0012 100   100   000    Old_age  Always  -           0
        198 Offline_Uncorrectable   0x0010 100   100   000    Old_age  Offline -           0
        199 UDMA_CRC_Error_Count    0x003e 200   200   000    Old_age  Always  -           0
        240 Head_Flying_Hours       0x0000 100   253   000    Old_age  Offline -           73950746906346
        241 Total_LBAs_Written      0x0000 100   253   000    Old_age  Offline -           1832967731
        242 Total_LBAs_Read         0x0000 100   253   000    Old_age  Offline -           3294986902

    Any clue to this mystery will be really welcome. Thank you very much!!

    Read the article

  • No boot loader found - dual booting Windows 8 with Ubuntu 14.04

    - by Sriram
    I have been trying in vain to dual boot my computer with Windows 8 Pro (pre-installed) and Ubuntu 14.04 64-bit. I have been able to successfully install Ubuntu 14.04, but the option to start Ubuntu does not appear on startup. This is after having taken all steps as mentioned in Installing Ubuntu on a Pre-Installed Windows 8 (64-bit) System (UEFI Supported). I even tried the boot repair option and ended up with this error log. My questions are: How do I solve for No boot loaders found in /dev/...? Are there any other recommendations that will help me solve this? Other points that may be important: Booting into Ubuntu from a live USB shows all Ubuntu partitions on the hard drive.

    Read the article

  • Oracle Database Appliance Technical Boot Camp

    - by mseika
    Oracle Database Appliance Technical Boot Camp
    Wednesday 19th September, 9.30 – 16.30
    This session is designed to give our partners detailed sales and technical information to familiarise themselves with the Oracle Database Appliance. It is split into two sessions, the first aimed at sales and pre-sales technical support, and the second aimed at pre-sales and technical implementation staff. The agenda is as follows:
    Part 1
    Oracle Engineered Systems
    Introducing the Oracle Database Appliance
    What is the target market?
    Competitive positioning
    Sales plays
    Upsell opportunities
    Resell requirements and process
    Part 2
    Hardware internals
    Download the appliance software kit
    Disabling / enabling cores
    Configuration and setup
    Oracle 11g R2 overview
    Backup strategies
    Please register here.

    Read the article

  • Caching items in Orchard

    - by Bertrand Le Roy
    Orchard has its own caching API that, while built on top of ASP.NET's caching feature, adds a couple of interesting twists. In addition to its usual work, the Orchard cache API must transparently separate the cache entries by tenant, but beyond that, it does offer a more modern API. Here's, for example, how I'm using the API in the new version of my Favicon module:

        _cacheManager.Get(
            "Vandelay.Favicon.Url",
            ctx => {
                ctx.Monitor(_signals.When("Vandelay.Favicon.Changed"));
                var faviconSettings = ...;
                return faviconSettings.FaviconUrl;
            });

    There is no need for any code to test for the existence of the cache entry or to later fill that entry. Seriously, how many times have you written code like this:

        var faviconUrl = (string)cache["Vandelay.Favicon.Url"];
        if (faviconUrl == null) {
            faviconUrl = ...;
            cache.Add("Vandelay.Favicon.Url", faviconUrl, ...);
        }

    Orchard's cache API takes that control flow and internalizes it into the API so that you never have to write it again. Notice how even casting the object from the cache is no longer necessary, as the type can be inferred from the return type of the lambda. The lambda itself is of course only hit when the cache entry is not found. In addition to fetching the object we're looking for, it also sets up the dependencies to monitor. You can monitor anything that implements IVolatileToken. Here, we are monitoring a specific signal ("Vandelay.Favicon.Changed") that can be triggered by other parts of the application like so:

        _signals.Trigger("Vandelay.Favicon.Changed");

    In other words, you don't explicitly expire the cache entry. Instead, something happens that triggers the expiration. Other implementations of IVolatileToken include absolute expiration or monitoring of the files under a virtual path, but you can also come up with your own.
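
    Since the post mentions you can come up with your own IVolatileToken, here is a minimal sketch of what a custom token might look like, assuming the interface lives in Orchard.Caching and exposes a single IsCurrent property; the class and its behaviour are illustrative, not taken from the original article:

        using System;
        using Orchard.Caching;

        // Hypothetical token: the cache entry stays valid until a fixed point in
        // time, after which the entry is re-evaluated on the next Get call.
        public class ExpireAtToken : IVolatileToken
        {
            private readonly DateTime _expireAtUtc;

            public ExpireAtToken(DateTime expireAtUtc)
            {
                _expireAtUtc = expireAtUtc;
            }

            // Returns false once the deadline has passed, invalidating the entry.
            public bool IsCurrent
            {
                get { return DateTime.UtcNow < _expireAtUtc; }
            }
        }

    Inside the Get lambda you would then monitor it alongside (or instead of) a signal, for example: ctx.Monitor(new ExpireAtToken(DateTime.UtcNow.AddMinutes(10)));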

    Read the article

  • Oracle Virtualization Friday Spotlight - November 8, 2013

    - by Monica Kumar
    Hands-on Private Cloud Simulator In One Hour
    Submitted by: Doan Nguyen, Senior Principal Product Marketing Director
    My aeronautics instructor used to say, "you can’t appreciate flying until you take flight." To clarify, this is not about gearing up in a flying squirrel suit and hopping off a cliff (topic for another blog!) but rather about flying an airplane. The idea is to get hands-on with the controls in the cockpit and experience flight before you actually fly a real plane. After the initial 40 hours of flight time, the concept sank in and it really made sense. This concept is what inspired our technical experts to put together the hands-on lab for a private cloud deployment and management self-service model. Yes, we are comparing the lab to a flight simulator! Let’s look at the parallels:
    To get trained to fly, starting in the simulator gets you off the ground quicker. There is no need to have a real plane to begin with. In a hands-on lab, there is no need for a real server with networking and real storage installed. All you need is your laptop.
    The simulator is pre-configured, pre-flight check done. Similarly, in a hands-on lab, Oracle VM and Oracle Enterprise Manager are pre-configured and assembled using Oracle VM VirtualBox as the container. Software installations are not needed.
    After time spent training at the controls, you can really appreciate the practical experience of flying. Along the same lines, the hands-on lab is a guided learning path, without the encumbrances of hardware or software installation, so you can learn about cloud deployment and management. However, unlike the simulator training, your time investment with the lab is only about an hour and not 40 hours!
    This hands-on lab takes you through private cloud deployment and management using Oracle VM and Oracle Enterprise Manager Cloud Control 12c in an Infrastructure as a Service (IaaS) model. You will first configure the IaaS cloud as the cloud administrator and then deploy guest virtual machines (VMs) as a self-service user. Then you are ready to take flight into the cloud! Why not step into the cockpit now!

    Read the article

  • Using IF statements to find string length in array for alignment (Visual Basic)

    - by Brodoin
    My question is just as it says in the title. How would one use IF statements to find the string-length of content in an array, and then make it so that they show up in a Rich Text Box with the left sides aligned? Noting that one value in my array is a Decimal.

        Imports System.IO
        Imports System.Convert

        Public Class frmAll
            'Declare Streamreader
            Private objReader As StreamReader
            'Declare arrays to hold the information
            Private strNumber(24) As String
            Private strName(24) As String
            Private strSize(24) As String
            Private decCost(24) As Integer

            Private Sub frmAll_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                'Set objReader
                objReader = New StreamReader("products.csv")
                'Call the FillArray sub to fill the array
                Call FillArray()
            End Sub

            Private Sub FillArray()
                'Declare variables and arrays
                Dim decCost(24, 1) As Decimal
                Dim strFields() As String
                Dim strRec As String
                Dim intCount As Integer = 0
                Dim chrdelim As Char = ToChar(",")
                'Set strRec to read the lines
                strRec = objReader.ReadLine
                'Do while loop to fill array.
                Do While strRec <> Nothing
                    strFields = strRec.Split(chrdelim)
                    strNumber(intCount) = strFields(0)
                    strName(intCount) = strFields(1)
                    strSize(intCount) = strFields(2)
                    decCost(intCount, 0) = ToDecimal(strFields(3))
                    decCost(intCount, 1) = ToDecimal(strFields(4))
                    'Set strRec to read the lines again
                    strRec = objReader.ReadLine
                    'increment the index
                    intCount += 1
                Loop
                'Call the Calculate sub for calculation
                Call Calculate(decCost)
            End Sub

            Private Sub Calculate(ByVal numIn(,) As Decimal)
                'Define arrays to hold total cost
                Dim decRowTotal(24) As Decimal
                'Define variables to hold the counters for rows and columns
                Dim intR As Integer
                Dim intC As Integer
                'Calcualte total cost
                For intC = 0 To 1
                    For intR = 0 To 24
                        decRowTotal(intR) += numIn(intR, intC) * 1
                    Next
                Next
                'Call the Output sub to configure the output.
                Call Output(numIn, decRowTotal)
            End Sub

            Private Sub Output(ByVal NumIn(,) As Decimal, _
                               ByVal RowTotalIn() As Decimal)
                'Variables
                Dim strOut As String
                Dim intR As Integer = 0
                Dim intC As Integer = 0
                'Set header for output.
                strOut = "ID" & vbTab & "Item" & vbTab & vbTab & vbTab & "Size" & _
                         vbTab & vbTab & vbTab & vbTab & "Total Price" & _
                         vbCrLf & "---------- ... -------------------------" & vbCrLf
                'For loop to add each line to strOut, setting
                'the RowTotalIn to currency.
                For intC = 0 To 24
                    strOut &= strNumber(intC) & vbTab
                    strOut &= strName(intC) & vbTab
                    strOut &= strSize(intC) & vbTab
                    strOut &= RowTotalIn(intC).ToString("c") & vbCrLf
                Next
                'Add strOut to rbtAll
                rtbAll.Text = strOut
            End Sub
        End Class

    Output: It shows up with vbTabs in my output, but still, it looks similar in that they are not aligned. The first two do, but after that they are not, and I am totally lost.

        P0001 Coffee - Colombian Supreme 24/Case: Pre-Ground 1.75 Oz Bags $16.50
        P0002 Coffee - Hazelnut 24/Case: Pre-Ground 1.75 Oz Bags $24.00
        P0003 Coffee - Mild Blend 24/Case: Pre-Ground 1.75 Oz Bags $20.50
        P0004 Coffee - Assorted Flavors 18/Case. Pre-Ground 1.75 Oz Bags $23.50
        P0005 Coffee - Decaf 24/Case: Pre-Ground 1.75 Oz Bags $20.50
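
    For the alignment itself, tabs only line up when every cell happens to reach the same tab stop; a more reliable approach is fixed-width columns built with composite format alignment specifiers, plus a monospaced font in the RichTextBox. Below is a minimal sketch of that idea in C# (the same String.Format/AppendFormat alignment syntax is available from VB); the column widths and the sample rows are illustrative assumptions, not taken from the question:

        using System;
        using System.Text;

        class ColumnDemo
        {
            static void Main()
            {
                // Hypothetical rows standing in for the arrays built from products.csv.
                var rows = new[]
                {
                    new { Id = "P0001", Name = "Coffee - Colombian Supreme", Size = "24/Case: Pre-Ground 1.75 Oz Bags", Total = 16.50m },
                    new { Id = "P0004", Name = "Coffee - Assorted Flavors",  Size = "18/Case. Pre-Ground 1.75 Oz Bags", Total = 23.50m }
                };

                var sb = new StringBuilder();
                // Negative widths left-align a column, positive widths right-align it.
                sb.AppendFormat("{0,-8}{1,-30}{2,-36}{3,12}", "ID", "Item", "Size", "Total Price").AppendLine();

                foreach (var row in rows)
                {
                    sb.AppendFormat("{0,-8}{1,-30}{2,-36}{3,12:c}", row.Id, row.Name, row.Size, row.Total).AppendLine();
                }

                // In the form this string would go into rtbAll.Text; the columns stay
                // aligned as long as the control uses a monospaced font.
                Console.Write(sb.ToString());
            }
        }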

    Read the article

  • WebLogic Silent Install 11.1.1.4 (WLS 10.3.4)

    - by john.graves(at)oracle.com
    This is just a quick note to remind myself of how incredibly easy it is to install the base products without the aid of a mouse! Note to Windoze users: Why?!?! I’m only showing Linux examples in this blog, so I encourage you to just say NO to win-no-z.

    install.sh

        #!/bin/bash
        ./wls1034_oepe111161_linux32.bin -mode=silent -silent_xml=./silent.xml

    silent.xml

        <?xml version="1.0" encoding="UTF-8"?>
        <bea-installer>
          <input-fields>
            <data-value name="BEAHOME" value="/opt/app/wls10.3.4" />
            <data-value name="WLS_INSTALL_DIR" value="/opt/app/wls10.3.4/wlserver_10.3" />
          </input-fields>
        </bea-installer>

    Note about Oracle_Home: Since all products are moving to a common WLS base, I simply use the WLS version as my Oracle Home, in this case wls10.3.4. Also, I keep my user_projects outside my Oracle_Home directory to keep things clean. I typically use /opt/app/user_projects or a variation of that.

    Read the article

  • How should I store a Game Database on Android?

    - by Liam
    I'm looking at creating a game for Android, and while I have most of the ins and outs worked out, the one thing I'm struggling with is how to store data for the game. Ultimately, the game will be based off of a lot of pre-defined data and statistics, so the obvious choice to me would be something like SQLite, but as I'm pretty new to the realm of Android and game development, I'm not 100% certain this is the right route to follow. The data will be general pre-defined data as well as player data (along the lines of career stats: what place they finished, etc.). I was wondering if there is a better/best-practice solution that isn't SQLite and that would provide said functionality, and if so, could you point me in the right direction?

    Read the article

  • JOB OF THE WEEK

    - by Tim Koekkoek
    ERP Pre-Sales Consultant - Malaga
    The job as an ERP Pre-Sales Consultant is challenging and diverse, and you will be working in a multinational environment in our EMEA Presales Centre in the vibrant city of Malaga. Frequent possibilities to support opportunities in various industries and countries will give you an excellent insight into customer business needs and market trends. You will support the ERP Presales organisation for the Benelux, Germany, UK and Spain (depending on your language) and be trained in the Oracle ERP product portfolio as well as the Presales role. If you are interested in this position, read more here! For all of our other vacancies and internships, please visit https://campus.oracle.com.

    Read the article

  • Dell M6800 with Windows 8.1 on partition 1: cannot run it if Ubuntu and Fedora are the 2nd and 3rd OSes?

    - by user289334
    On a Dell M6800 machine that has Windows 8 pre-installed, I upgraded to 8.1, then I loaded Ubuntu 14.04 and then Fedora 20 the only way I could, that is, in "Legacy" BIOS mode. BTW the Ubuntu install was unable to complete but the Fedora one did, and it left everything working: 2 bootable OSes, with Grub2 from Fedora doing the boots, but Windows 8.1 is now "invisible". I have run Boot-Repair but the info is not useful. It tells me to switch to UEFI, which, on the M6800, doesn't work with this setup (too long-winded to explain why here). I need to have the Grub2 configs "see" the original Windows 8.1 partition, with the BIOS switched to "Legacy". BTW various posts have said to switch to UEFI to boot from, say, a USB stick or DVD; this is wrong, you can't: UEFI mode only allows booting from the Windows partition, which it says has a Windows 8.1 on it, which doesn't boot. Basically, if you have actually succeeded in loading Ubuntu 14.04 or Fedora 20 on an M6800, which comes with pre-loaded Windows 8, you will know how I can fix this.

    Read the article

  • Replacing objects, handling clones, dealing with write logs

    - by Alix
    Hi everyone, I'm dealing with a problem I can't figure out how to solve, and I'd love to hear some suggestions. [NOTE: I realise I'm asking several questions; however, answers need to take into account all of the issues, so I cannot split this into several questions.] Here's the deal: I'm implementing a system that underlies user applications and that protects shared objects from concurrent accesses. The application programmer (whose application will run on top of my system) defines such shared objects like this:

        public class MyAtomicObject
        {
            // These are just examples of fields you may want to have in your class.
            public virtual int x { get; set; }
            public virtual List<int> list { get; set; }
            public virtual MyClassA objA { get; set; }
            public virtual MyClassB objB { get; set; }
        }

    As you can see, they declare the fields of their class as auto-generated properties (auto-generated means they don't need to implement get and set). This is so that I can go in and extend their class and implement each get and set myself in order to handle possible concurrent accesses, etc. This is all well and good, but now it starts to get ugly: the application threads run transactions, like this:
    1. The thread signals it's starting a transaction. This means we now need to monitor its accesses to the fields of the atomic objects.
    2. The thread runs its code, possibly accessing fields for reading or writing. If there are accesses for writing, we'll hide them from the other transactions (other threads), and only make them visible in step 3. This is because the transaction may fail and have to roll back (undo) its updates, and in that case we don't want other threads to see its "dirty" data.
    3. The thread signals it wants to commit the transaction. If the commit is successful, the updates it made will now become visible to everyone else. Otherwise, the transaction will abort, the updates will remain invisible, and no one will ever know the transaction was there.
    So basically the concept of a transaction is a series of accesses that appear to have happened atomically, that is, all at the same time, in the same instant, which would be the moment of successful commit. (This is as opposed to its updates becoming visible as it makes them.)
    In order to hide the write accesses in step 2, I clone the accessed field (let's say it's the field list) and put it in the transaction's write log. After that, any time the transaction accesses list, it will actually be accessing the clone in its write log, and not the global copy everyone else sees. This way, any changes it makes will be done to the (invisible) clone, not to the global copy. If in step 3 the commit is successful, the transaction should replace the global copy with the updated list it has in its write log, and then the changes become visible for everyone else at once. It would be something like this:

        myAtomicObject.list = updatedCloneOfListInTheWriteLog;

    Problem #1: possible references to the list. Let's say someone puts a reference to the global list in a dictionary. When I do...

        myAtomicObject.list = updatedCloneOfListInTheWriteLog;

    ...I'm just replacing the reference in the field list, but not the real object (I'm not overwriting the data), so the dictionary will still hold a reference to the old version of the list. A possible solution would be to overwrite the data (in the case of a list, empty the global list and add all the elements of the clone). More generically, I would need to copy the fields of one list to the other. I can do this with reflection, but that's not very pretty. Is there any other way to do it?
    Problem #2: even if problem #1 is solved, I still have a similar problem with the clone: the application programmer doesn't know I'm giving him a clone and not the global copy. What if he puts the clone in a dictionary? Then at commit there will be some references to the global copy and some to the clone, when in truth they should all point to the same object. I thought about providing a wrapper object that contains both the cloned list and a pointer to the global copy, but the programmer doesn't know about this wrapper, so they're not going to use the pointer at all. The wrapper would be like this:

        public class Wrapper<T> : T
        {
            // This would be the pointer to the global copy. The local data is
            // contained in whatever fields the wrapper inherits from T.
            private T thisPtr;
        }

    I do need this wrapper for comparisons: if I have a dictionary with an entry keyed by the global copy and I look it up with the clone, like this:

        dictionary[updatedCloneOfListInTheWriteLog]

    I need it to return the entry, that is, to consider that updatedCloneOfListInTheWriteLog and the global copy are the same thing. For this, I can just override Equals, GetHashCode, operator== and operator!=, no problem. However, I still don't know how to solve the case in which the programmer unknowingly inserts a reference to the clone in a dictionary.
    Problem #3: the wrapper must extend the class of the object it wraps (if it's wrapping MyClassA, it must extend MyClassA) so that it's accepted wherever an object of that class (MyClassA) would be accepted. However, that class (MyClassA) may be final. This is pretty horrible :$. Any suggestions? I don't need to use a wrapper, anything you can think of is fine. What I cannot change is the write log (I need to have a write log) and the fact that the programmer doesn't know about the clone. I hope I've made some sense. Feel free to ask for more info if something needs some clearing up. Thanks so much!
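
    For what it's worth, here is a minimal C# sketch of the write-log mechanism described above (clone on first write, read through the clone, publish on commit). It is an illustration of the idea only: every name is invented, it assumes each shared value is cloned as a whole, and it deliberately sidesteps problems #1 to #3 rather than solving them.

        using System;
        using System.Collections.Generic;

        // Hypothetical per-transaction write log: maps a field's global value to
        // the private clone this transaction reads and writes instead.
        public class WriteLog
        {
            private readonly Dictionary<object, object> _clones = new Dictionary<object, object>();

            // Read through the log: return the clone if one exists, otherwise the
            // global copy that every other thread sees.
            public T Read<T>(T globalCopy) where T : class
            {
                object clone;
                return _clones.TryGetValue(globalCopy, out clone) ? (T)clone : globalCopy;
            }

            // First write: clone the global copy so the changes stay invisible to
            // other transactions until commit.
            public T Write<T>(T globalCopy, Func<T, T> cloneFactory) where T : class
            {
                object clone;
                if (!_clones.TryGetValue(globalCopy, out clone))
                {
                    clone = cloneFactory(globalCopy);
                    _clones[globalCopy] = clone;
                }
                return (T)clone;
            }

            // Commit: publish each clone, e.g. by swapping the field's reference.
            // This is exactly the reference swap the post says breaks outstanding
            // references; copying the data back in place would be the alternative.
            public void Commit(Action<object, object> publish)
            {
                foreach (var entry in _clones)
                {
                    publish(entry.Key, entry.Value);
                }
                _clones.Clear();
            }
        }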

    Read the article

  • php file upload problem [closed]

    - by newcomer
    This code works properly on my localhost (I am using XAMPP 1.7.3), but when I put it on the live server it shows "Possible file upload attack!". 'upload/' is a folder under the 'public_html' folder on the server, and I can upload files via another script in that directory.

        <?php
        $uploaddir = '/upload/'; // I used C:/xampp/htdocs/upload/ on localhost. Is it correct here?
        $uploadfile = $uploaddir . basename($_FILES['file_0']['name']);

        echo '<pre>';
        if (move_uploaded_file($_FILES['file_0']['tmp_name'], $uploadfile)) {
            echo "File is valid, and was successfully uploaded.\n";
        } else {
            echo "Possible file upload attack!\n";
        }

        echo 'Here is some more debugging info:';
        print_r($_FILES);
        print "</pre>";
        ?>

    Read the article

  • Performance Improvement in .NET 4.5: Multicore Just-in-Time (JIT)

    - by anobre
    Hello everyone! While reading up on the performance improvements in the .NET 4.5 platform, I came across something extremely interesting: Multicore Just-in-Time (JIT) compilation. The theory is very simple: why not use several cores for JIT compilation? And beyond that, would it be possible to compile the methods in a specific order, so that the first ones compiled are those most likely to be executed? It sounds a little crazy, but that is exactly what Multicore Just-in-Time (JIT) does. And best of all, it does so in an extremely simple way. ASP.NET 4.5 applications already do it by default. In other cases, it only takes two lines of code: one indicating the folder where the file that will store the profile is kept, and another to start the procedure. This profile is the file responsible for storing the compilation order of the methods, so that those with the highest chance of being executed early are compiled first. The code for this process:

        ProfileOptimization.SetProfileRoot(@"C:\ProfileRoot");
        ProfileOptimization.StartProfile("profile");

    This compilation optimization will only be noticed after the profile has been created, so nothing will be perceptible the first time the application runs. At the end of the process, a file with the chosen name (in this case "profile") will be created in the folder indicated as the root. There's the tip! Cheers!
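
    As a rough illustration of where those two calls typically go in a non-ASP.NET application, assuming .NET 4.5 and the System.Runtime.ProfileOptimization class the post refers to, here is a minimal sketch (the profile folder and names are placeholders):

        using System;
        using System.Runtime;

        class Program
        {
            static void Main()
            {
                // Must run before the methods you care about get JIT-compiled, so it
                // goes at the very top of Main. The profile folder must already exist.
                ProfileOptimization.SetProfileRoot(@"C:\ProfileRoot");
                ProfileOptimization.StartProfile("profile");

                // ... rest of the application. On later runs, the stored profile lets
                // the background JIT compile the likely-needed methods on spare cores.
                Console.WriteLine("Application started.");
            }
        }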

    Read the article

  • VirtualBox 4.0 & Oracle DB

    - by Yusuke.Yamamoto
    [The Japanese text of this post did not survive character encoding; only the following references are recoverable.]
    Oracle VM VirtualBox 4.0 release announcement: Oracle Unveils Oracle VM VirtualBox 4.0
    Oracle VM VirtualBox Pre-built Appliances, including a Developer Days Appliance with Oracle Enterprise Linux 5, Oracle Database 11g Release 2, and TimesTen In-Memory Database Cache
    Database App Development VM Appliance aka Oracle VM VirtualBox Appliance - wmo6hash::blog
    Oracle VM VirtualBox - @IT

    Read the article

  • Using extension methods to decrease the surface area of a C# interface

    - by brian_ritchie
    An interface defines a contract to be implemented by one or more classes. One of the keys to a well-designed interface is defining a very specific range of functionality. The profile of the interface should be limited to a single purpose & should have the minimum methods required to implement this functionality. Keeping the interface tight will keep those implementing the interface from getting lazy & not implementing it properly. I've seen too many overly broad interfaces that aren't fully implemented by developers. Instead, they just throw a NotImplementedException for the method they didn't implement. One way to help with this issue is by using extension methods to move overloaded method definitions outside of the interface. Consider the following example:

        public interface IFileTransfer
        {
            void SendFile(Stream stream, Uri destination);
        }

        public static class IFileTransferExtension
        {
            public static void SendFile(this IFileTransfer transfer,
                string Filename, Uri destination)
            {
                using (var fs = File.OpenRead(Filename))
                {
                    transfer.SendFile(fs, destination);
                }
            }
        }

        public static class TestIFileTransfer
        {
            static void Main()
            {
                IFileTransfer transfer = new FTPFileTransfer("user", "pass");
                transfer.SendFile(filename, new Uri("ftp://ftp.test.com"));
            }
        }

    In this example, you may have a number of overloads that use different mechanisms for specifying the source file. The great part is, you don't need to implement these methods on each of your derived classes. This gives you a better interface and better code reuse.

    Read the article

  • Which game library/engine to choose?

    - by AllTheThingsSheSaid
    I'm not a programming beginner at all. I've tried 2 libraries/engines so far: Allegro and Unity. As for Allegro, I think it's not good if I want to make a career in the gaming industry, since it's not powerful enough. I don't feel comfortable with Unity; it's more like a piece of software such as Photoshop or Flash. You can do almost everything with pre-defined functions and tools, and with less coding. I need something which offers fewer tools and pre-defined functions and more coding work. It would be awesome if it's free and C/C++ based. I need both 2D and 3D. And please, don't tell me to make my own library; I am not "that" advanced. Any other information about the gaming industry would be greatly appreciated.

    Read the article
