Search Results

Search found 38931 results on 1558 pages for 'database testing'.


  • Connection String Incorrectly Formatted [migrated]

    - by Randy E
    I'm running into some issues. Every time I launch into debug mode and hit the "Create User" button, an exception is thrown because the connection string is either in the wrong syntax or just wrong. I'm using Visual Studio 2010; the project is .NET 3.5 with SQL Server 2008 Express. This is just a personal project that I'm testing some other things with. I know this generally isn't the recommended format, but for a small personal project I don't see the point in doing it any other way. The things I'm testing actually work :)

        Data Source=.\SQLEXPRESS;AttachDbFilename=|Data Directory|ASPAppDatabase.mdf;Integrated Security=True;User Instance=True

    That doesn't work.

        Data Source=.\SQLEXPRESS;AttachDbFilename="|Data Directory|ASPAppDatabase.mdf";Integrated Security=True;User Instance=True

    Neither does that.

        Data Source=.\SQLEXPRESS;AttachDbFilename='|Data Directory|ASPAppDatabase.mdf';Integrated Security=True;User Instance=True

    And again, neither does that :/

    However, if I use the full path to the local DB file instead of "|Data Directory|", it works just fine: no exception is thrown, and I can read and write as I need. And just to cover all my bases, here is the button click event that creates the user:

        protected void btnAddUser_Click(object sender, EventArgs e)
        {
            Membership.CreateUser(txtUserName.Text, txtPassword.Text);
            btnLogin_Click(sender, e);
        }

    So, what am I missing in regards to the |Data Directory|? Here is an example of the above not working correctly, taken directly from web.config:

        <add name="ASPAppDatabaseConnection" connectionString='Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|ASPAppDatabase.mdf;Integrated Security=True;User Instance=True'/>
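    A note in passing: the first three strings spell the token with a space, |Data Directory|, but the substitution token the SQL client libraries recognize has no space, and it is conventionally followed by a backslash. Assuming the .mdf sits in the folder DataDirectory resolves to (App_Data by default in an ASP.NET application), the commonly working form of that fragment is:

        AttachDbFilename=|DataDirectory|\ASPAppDatabase.mdf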

    Read the article

  • Architecture : am I doing things right?

    - by Jeremy D
    I'm trying to use a '~classic' layered architecture using .NET and Entity Framework. We are starting from a legacy database which is a little bit crappy:

    - Inconsistent naming
    - Unneeded views (views referencing other views, select * views, etc.)
    - Aggregated columns
    - Potatoes and carrots in the same table
    - etc.

    So I ended up fully isolating my database structure from my domain model. To do so, EF entities are hidden from the presentation layer. The goal is to permit easier database refactoring while lowering its impact on applications. I'm now facing a lot of challenges and I'm starting to ask myself if I'm doing things right.

    My domain model is highly volatile; it keeps evolving with the apps as new field needs arise. Its complexity keeps rising, and the classes it contains are starting to accumulate a lot of properties. Creating an include strategy and reprojecting to EF is very tricky (my domain objects don't have any kind of lazy/eager loading relationship properties):

        DomainInclude<Domain.Model.Bar>.Include("Customers").Include("Customers.Friends")
        // ...becomes...
        IFooContext.Bars.Include(...).Include(...).Where(...)

    Some frameworks trample the isolation layers (DevExpress grids, which need either XPO or IQueryable for filtering and paging large data sets). I'm starting to ask myself if:

    - the isolation of EF auto-generated entities is an unneeded cost. Should I allow frameworks to hit IQueryable? Slow slope to hell? (It's really hard to isolate the DevExpress framework; any successful experience?)
    - the high volatility of my domain model is normal?

    Did you have similar difficulties? Any advice based on experience?

    Read the article

  • Is it conceivable to have millions of lists of data in memory in Python?

    - by Codemonkey
    I have over the last 30 days been developing a Python application that utilizes a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640MB of disk space.

    I'm thinking about speed optimizations, and I've got to assume that when validating 10,000+ addresses, each validation running up to 20 queries to the database, networking is a speed bottleneck. I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a row-of-rows structure in Python.

    Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step?

    The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local.

    Read the article

  • Triggering Data Changes in N-Tier

    - by Ryan Kinal
    I've been studying n-tier architectures as of late, particularly in VB.NET with Entity Framework and/or LINQ to SQL. I understand the basic concepts, but have been wondering about best practices in regard to triggering CRUD-type operations from user input/action. So, the architecture looks something like the following:

        [presentation layer] - [business layer] - [data layer] - (database)

    Getting information from the database into the presentation layer is simple and abstracted. It's just a matter of instantiating a new object from the business layer, which in turn uses the data layer to get at the correct information. However, saving (updating and inserting) and deleting seem to require particular APIs on the relevant business objects. I have to assume this is standard practice, unless a business object will save itself on various operations (which seems inefficient), or on disposal (which seems like it just wouldn't work, or may be unwieldy and unreliable).

    Should my "savable" business objects all implement a particular "ISavable" or "IDatabaseObject" interface? Is this a recognized (anti-)pattern? Are there other, better patterns I should be using that I'm just unaware of? The TLDR question, I suppose, is: how does the presentation layer trigger database changes?
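    A minimal sketch of the explicit-save pattern the asker describes, with the presentation layer triggering the write. It is rendered in Java for brevity rather than VB.NET, and Savable, OrderRepository, and the other names are illustrative, not from the post:

        // Hypothetical contract for business objects that persist themselves
        // through the data layer when explicitly asked to.
        interface Savable {
            void save();
            void delete();
        }

        interface OrderRepository {          // data-layer gateway (illustrative)
            void upsert(Order order);
            void remove(Order order);
        }

        class Order implements Savable {
            private final OrderRepository repository;
            private String status;

            Order(OrderRepository repository) { this.repository = repository; }

            void setStatus(String status) { this.status = status; }

            @Override public void save()   { repository.upsert(this); }
            @Override public void delete() { repository.remove(this); }
        }

        class CheckoutScreen {
            // Presentation layer: nothing is persisted until the user action
            // explicitly asks the business object to save itself.
            void onSaveButtonClicked(Order order) {
                order.setStatus("CONFIRMED");
                order.save();
            }
        }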

    Read the article

  • Assuming "clean code/architecture" is there a difference in "effort" between PHP or Java/J2EE web application development?

    - by PhD
    A client asked us to estimate effort when selecting PHP as the implementation language for his next web-based application. We spent about a week exploring PHP, prototyping, testing, etc. We are quite new to this language; we may have hacked around it in the past, but let's go with PHP noobs who are application development experts (for lack of a better, less flattering word :)

    It seems that if we write clean, maintainable code and follow separation of concerns and enterprise architecture patterns (DAOs, etc.), the 'effort' in creating an object-oriented PHP-based web application seems to be the same as for a Java-based one. Here's our equation for estimating the effort (development/delivery time):

        ConstructionEffort = f(analysis, design, coding, testing, review, deployment)

    We were specifically comparing effort estimates in creating an enterprise application with the following:

    - PHP + CakePHP/CodeIgniter (should we have considered others?)
    - Java + Spring + Restlet

    It's an end-to-end application:

    - Client: JavaScript/jQuery + HTML/CSS
    - Middle tier/business logic: (still evaluating PHP/Java)
    - Database: MySQL

    The effort estimates of the 1st and 3rd tiers are constant and relatively independent of the middle tier's technology. At a high level, with an initial breakdown into user stories of the requested features as well as a high-level SWAG on the sheer number of classes/SLOC that would be required, PHP doesn't seem to differ by much from what is required of the same in Java. Is this correct?

    We are basing our initial estimates on the initial prototyping/coding we've done with PHP. We are currently disregarding fluency with the language as a factor, since that'll be an initial hurdle and not a long-term impediment IMHO (we also have sufficient time to become quite fluent with PHP). I'm interested in knowing the programmers' perspective with respect to effort when creating similar applications with either of the languages, to justify choosing one over the other. Are we missing something here? It seems we are going against the popular belief of PHP being quicker to market (or, being very fluent with Java, we have our vision clouded). It doesn't seem to have any coding/programming effort savings from what we've played around with.

    Read the article

  • How are objects modelled in a functional programming language?

    - by Giorgio
    In an answer to this question (written by Pete) there are some considerations about OOP versus FP. In particular, it is suggested that FP languages are not very suitable for modelling (persistent) objects that have an identity and a mutable state. I was wondering if this is true or, in other words, how one would model objects in a functional programming language. From my basic knowledge of Haskell I thought that one could use monads in some way, but I really do not know enough on this topic to come up with a clear answer.

    So, how are entities with an identity and a mutable persistent state normally modelled in a functional language?

    EDIT: Here are some further details to clarify what I have in mind. Take a typical Java application in which I can (1) read a record from a database table into a Java object, (2) modify the object in different ways, (3) save the modified object to the database. How would this be implemented e.g. in Haskell? I would initially read the record into a record value (defined by a data definition), perform different transformations by applying functions to this initial value (each intermediate value is a new, modified copy of the original record) and then write the final record value to the database.

    Is this all there is to it? How can I ensure that at each moment in time only one copy of the record is valid / accessible? One does not want to have different immutable values representing different snapshots of the same object be accessible at the same time.

    Read the article

  • What are some techniques I can use to refactor Object Oriented code into Functional code?

    - by tieTYT
    I've spent about 20-40 hours developing part of a game using JavaScript and HTML5 canvas. When I started I had no idea what I was doing. So it started as a proof of concept and is coming along nicely now, but it has no automated tests. The game is starting to become complex enough that it could benefit from some automated testing, but it seems tough to do because the code depends on mutating global state.

    I'd like to refactor the whole thing using Underscore.js, a functional programming library for JavaScript. Part of me thinks I should just start from scratch using a functional programming style and testing. But I think refactoring the imperative code into declarative code might be a better learning experience and a safer way to get to my current state of functionality.

    Problem is, I know what I want my code to look like in the end, but I don't know how to turn my current code into it. I'm hoping some people here could give me some tips a la the Refactoring book and Working Effectively With Legacy Code. For example, as a first step I'm thinking about "banning" global state: take every function that uses a global variable and pass it in as a parameter instead. The next step may be to "ban" mutation, and to always return a new object.

    Any advice would be appreciated. I've never taken OO code and refactored it into functional code before.
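    As a sketch of the asker's proposed first two steps (pass formerly global state in as a parameter, then return a new value instead of mutating), shown here in Java rather than the asker's JavaScript, with invented names:

        import java.util.ArrayList;
        import java.util.List;

        class SpawnExample {
            // Before: reads and mutates a global, so it cannot be tested
            // without setting up shared state first.
            static List<String> enemies = new ArrayList<>();

            static void spawnEnemyGlobal(String kind) {
                enemies.add(kind);                       // hidden side effect
            }

            // After steps 1 and 2: state comes in as a parameter and a new
            // list is returned; the function is now trivially testable.
            static List<String> spawnEnemy(List<String> current, String kind) {
                List<String> next = new ArrayList<>(current);
                next.add(kind);
                return next;
            }
        }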

    Read the article

  • How should I start refactoring my mostly-procedural C++ application?

    - by oob
    We have a program written in C++ that is mostly procedural, but we do use some C++ containers from the standard library (vector, map, list, etc.). We are constantly making changes to this code, so I wouldn't call it a stagnant piece of legacy code that we can just wrap up. There are a lot of issues with this code making it harder and harder for us to make changes, but I see the three biggest issues being:

    - Many of the functions do more (way more) than one thing
    - We violate the DRY principle left and right
    - We have global variables and global state up the wazoo

    I was thinking we should attack areas 1 and 2 first. Along the way, we can "de-globalize" our smaller functions from the bottom up by passing in information that is currently global as parameters to the lower-level functions from the higher-level functions, and then concentrate on figuring out how to remove the need for global variables as much as possible.

    I just finished reading Code Complete 2 and The Pragmatic Programmer, and I learned a lot, but I am feeling overwhelmed. I would like to implement unit testing, change from a procedural to an OO approach, automate testing, use a better logging system, fully validate all input, implement better error handling and many other things, but I know if we start all this at once, we would screw ourselves. I am thinking the three I listed are the most important to start with.

    Any suggestions are welcome. We are a team of two programmers, mostly with experience with in-house scripting. It is going to be hard to justify taking the time to refactor, especially if we can't bill the time to a client. Believe it or not, this project has been successful enough to keep us busy full time and also keep several consultants busy using it for client work.

    Read the article

  • Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 3)

    - by hinkmond
    So, let's now connect the parts together to make a Java Embedded ghost sensor using a Raspberry Pi. Grab your JFET transistor, LED light, wires, and breadboard and follow the connections on this diagram.

    - The JFET transistor plugs into the breadboard with the flat part facing left.
    - Plug a wire into the same breadboard hole row as the top JFET lead (green in the diagram) and keep it unconnected, to act as an antenna.
    - Connect a wire (red) from the middle lead of the JFET transistor to Pin 1 on your RPi GPIO header.
    - Connect another wire (blue) from the lower lead of the JFET transistor to Pin 25 on your RPi GPIO header, then connect another (blue) wire from the lower lead of the JFET transistor to the long end of a common cathode LED, and finally connect the short end of the LED with a wire (black) to Pin 6 (ground) of the RPi GPIO header.

    That's it. Easy. Now test it. See: Ghost Sensor Testing. Here's a video of me testing the Ghost Sensor circuit on my Raspberry Pi. We'll cover the Java SE app needed to record the ghost analytics in the next post.

    Hinkmond

    Read the article

  • Oracle Solaris 11.1 available today

    - by user12611852
    Today Oracle is pleased to announce availability of Oracle Solaris 11.1.

    - Download Solaris 11.1
    - Order Solaris 11.1 media kit
    - Existing customers can quickly and simply update using the network-based repository

    Highlights include:

    - 8x faster database startup and shutdown and online resizing of the database SGA, with a new optimized shared memory interface between the database and Oracle Solaris 11.1
    - Up to 20% throughput increases for Oracle Real Application Clusters by offloading lock management into the Oracle Solaris kernel
    - Expanded support for Software Defined Networks (SDN) with Edge Virtual Bridging enhancements to maximize network resource utilization and manage bandwidth in cloud environments
    - 4x faster Solaris Zone updates with parallel operations that shorten maintenance windows
    - New built-in memory predictor that monitors application memory use and provides optimized memory page sizes and resource location to speed overall application performance

    Learn more and share these valuable tools with your customers to enable them to move to Oracle Solaris 11.1 quickly. Many customers wait for the first update; now is the time to encourage them to install Oracle Solaris 11.1.

    - Oracle Solaris 11.1 Data Sheet
    - What's New in Oracle Solaris 11.1
    - Oracle Solaris 11.1 FAQs
    - Oracle Solaris 11.1 Customer Presentation

    Oracle Solaris 11.1 is recommended for all SPARC T4 systems and will soon be available preinstalled.

    Read the article

  • Oracle OpenWorld 2012

    - by Maria Colgan
    I can't believe it's time for OpenWorld again! Oracle OpenWorld is the largest gathering of Oracle customers, partners, developers, and technology enthusiasts. This year it will take place between September 30th and October 4th in San Francisco. Of course, the Optimizer development group will be there, and you will have multiple opportunities to meet the team in one of our technical sessions or at the Oracle Database demogrounds. This year the Optimizer team has 2 technical sessions, as well as a booth in the Oracle Database demogrounds.

    - Tuesday, October 2nd at 1:15pm: Oracle Optimizer: Harnessing the Power of Optimizer Hints (Session CON8455 at Moscone South, room 103). In this session we will discuss in detail how optimizer hints are interpreted, when they should be used, and why they sometimes appear to be ignored.
    - Thursday, October 4th at 12:45pm: Oracle Optimizer: An Insider's View of How the Optimizer Works (Session CON8457 at Moscone South, room 104). This session explains how the latest version of the optimizer works and the best ways you can influence its decisions to ensure you get optimal execution every time. It will also include a full history of the Cost Based Optimizer, so make sure you stick around for this one!

    If you have burning Optimizer or statistics related questions, or if you just want to pick up an Optimizer bumper sticker, you can stop by the Optimizer demo booth. This year we are located in booth 3157, in the Database area of the demogrounds in Moscone South. Members of the Optimizer development team will be there Monday through Wednesday from 9:45 am until 6 pm.

    The full Oracle OpenWorld catalog is on-line, or you can browse speakers by name. So start planning your trip today!

    +Maria Colgan

    Read the article

  • Using Queries with Coherence Write-Behind Caches

    - by jpurdy
    Applications that use write-behind caching and wish to query the logical entity set have the option of querying the NamedCache itself or querying the database. In the former case, no particular restrictions exist beyond the limitations intrinsic to the Coherence query engine itself. In the latter case, queries may see partially committed transactions (e.g. with a parent-child relationship, the version of the parent may be different than the version of the child objects) and/or significant version skew (the query may see the current version of one object and a far older version of another object). This is consistent with "read committed" semantics, but the read skew may be far greater than would ever occur in a non-cached environment.

    As is usually the case, the application developer may choose to accept these limitations (with the hope that they are sufficiently infrequent), or they may choose to validate the reads (perhaps via a version flag on the objects). This also applies to situations where a third party application (such as a reporting tool) is querying the database. In many cases, the database may only be in a consistent state after the Coherence cluster has been halted.
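    A sketch of the read-validation idea mentioned above: re-checking a version flag after the query so skewed reads can be detected and retried. This is plain Java with invented names, not Coherence-specific API:

        // Illustrative only: detect read skew between a parent row and the
        // child rows a database query returned alongside it.
        class Parent { long id; long version; }
        class Child  { long parentId; long parentVersionSeen; }

        class ReadValidator {
            // True when every child was written against the same parent
            // version the query returned; false signals version skew,
            // in which case the caller may re-run the query.
            static boolean consistent(Parent parent, Iterable<Child> children) {
                for (Child c : children) {
                    if (c.parentVersionSeen != parent.version) {
                        return false;
                    }
                }
                return true;
            }
        }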

    Read the article

  • My first blog post…

    - by steveh99999
    I’ve been meaning to start a blog for a while now (OK, for several years...). Finally, here it begins.

    First post: something really simple. A wise man once told me the best way to improve SQL Server performance: store less data. That's it, that's all there is to it. Over the years, I've seen the following:

    - A 200GB database which held 3 days' data. Once business requirements changed, we were able to hold only 1 day's data in this database.
    - A table developed by DBAs to hold application table cardinality information. That information was collected at 2-hour intervals every day for 7 years! After 7 years the DBA space-info table had become the largest table in the database: 60 million rows! It was a simple change to remove a lot of the historical intra-day data and change the schedule to run only once per evening. Suddenly that table held 6 million rows instead of 60 million.
    - Lots of backup and restore history held in msdb. See this post by Brent Ozar for more details on this issue.

    Imagine how much faster the backups, DBCC checks and reindexes ran when the above 3 changes were implemented? How often do you review your big databases and tables to see if you're actually holding only data that is really required by the business?

    Read the article

  • If I send an IPA over TestFlight, can it be used to deploy to the app store?

    - by Reid Belton
    I am currently working for a small startup. I was previously under contract; now I am working for equity (no pay). The thing is, there is not yet a signed agreement in place, as the details are being worked out. I may finish development before the contract is ready. I'm not currently under any contract or agreement, so the other party doesn't have any legal claim (that I know of) to the code I'm writing now, other than an NDA (which just precludes me from cutting him out and releasing on my own). He already has the old code that I wrote under contract.

    I've made it clear to the other party that I won't submit the app or turn over the code until there's something signed to protect my interests. I've stopped pushing commits to the company repo (I'm now the only developer actively working on the project). However, I would still like to send builds over TestFlight for feedback and testing purposes. The other party has access to the developer portal and iTunes Connect for code signing, etc.

    Things are amicable and I don't foresee getting burnt on this, but I'm not going to put myself in that position. My concern is that if I send a finished build via TestFlight, it could be extracted and submitted to the app store without my participation. They wouldn't have the source for future maintenance and updates, of course, but it could be reverse-engineered by another developer later working from the old code base.

    Is this technically feasible at all? If so, is there a way I can send builds for testing while protecting my interests?

    Read the article

  • How to reduce the CPU load on a hosting with WordPress installed as a CMS? [on hold]

    - by Akky Awesøme
    I have been using HostGator's Hatchling plan for three months. I got an email from the host saying that my website is creating an overload on the CPU. They said that I am eating up their processor and, as a precaution, they have temporarily suspended my account. When I contacted their customer support, they said:

        You have to optimize your database and use some sort of caching mechanism, where the script does not need to generate a new page with every request. This helps to lower the load that a script will cause.

    I am not a technical geek, and I am wondering how I will do this. I don't have the resources to hire a web developer for this job, and my website has been down for 48 hours. I was using WP Super Cache along with CloudFlare's free plan. Now I have installed the optimize-db plugin and optimized my database. Please provide me with some more tips on how to optimize my database to reduce CPU usage. Any help would be appreciated.

    Read the article

  • Suggestion: ALLFILES option for RESTORE

    - by Greg Low
    The default action when performing a backup is to append to the backup file, yet the default action when restoring a backup is to restore just the first file.

    I constantly come across customer situations where they are puzzled that they seem to have lost data after they have completed a restore. Invariably, it's just that they haven't restored all the backups contained within a single OS file. This happens most commonly with log backups, but also happens when they have not restored the most recent database backup file. It is not trivial to achieve this within simple T-SQL scripts when the number of backup files within the OS file is unknown. It really should be.

    I'd like to see a FILES=ALLFILES option on the RESTORE command. For RESTORE DATABASE, it should restore the most recent database backup plus any subsequent log files. For RESTORE LOG (which is the most important missing option), it should just restore all relevant log backups that are contained.

    If you agree, you know what to do: please vote: https://connect.microsoft.com/SQLServer/feedback/details/769204/option-to-restore-all-backups-files-within-a-media-set

    Alternately, how would you write a T-SQL command to restore all log backups within a single OS file where the number of files is unknown? Would love to hear creative solutions, because all the ones that I think of are pretty messy and need dynamic SQL.

    Read the article

  • Call DB Stored Procedure using @NamedStoredProcedureQuery Injection

    - by anwilson
    An Oracle Database stored procedure can be called from the EJB business layer to perform complex DB-specific operations. This approach avoids the overhead of frequent network hits, which could impact the end-user result. A stored procedure can be invoked from EJB session bean business logic using the org.eclipse.persistence.queries.StoredProcedureCall API, but that approach requires more coding to handle the session and the arguments of the stored procedure, thereby increasing maintenance effort. EJB 3.0 introduces @NamedStoredProcedureQuery injection to call a database stored procedure as a named query. This blog will take you through the steps to call an Oracle Database stored procedure using @NamedStoredProcedureQuery. The EMP_SAL_INCREMENT procedure available in the HR schema will be used in this sample.

    1. Create an Entity from the EMPLOYEES table.

    2. Add @NamedStoredProcedureQuery above @NamedQueries in Employees.java, with the definition given below. Observe how the stored procedure's arguments are handled easily in @NamedStoredProcedureQuery using @StoredProcedureParameter.

        @NamedStoredProcedureQuery(name = "Employees.increaseEmpSal",
            procedureName = "EMP_SAL_INCREMENT",
            resultClass = void.class,
            resultSetMapping = "",
            returnsResultSet = false,
            parameters = {
                @StoredProcedureParameter(name = "EMP_ID", queryParameter = "EMPID"),
                @StoredProcedureParameter(name = "SAL_INCR", queryParameter = "SALINCR")}
        )

    3. Expose the entity bean by creating a session facade.

    4. Add a business method to the session bean to access the stored procedure exposed as a named query.

        public void salaryRaise(Long empId, Long salIncrease) throws Exception {
            try {
                Query query = em.createNamedQuery("Employees.increaseEmpSal");
                query.setParameter("EMPID", empId);
                query.setParameter("SALINCR", salIncrease);
                query.executeUpdate();
            } catch (Exception ex) {
                throw ex;
            }
        }

    5. Expose the business method through the session bean remote interface.

        void salaryRaise(Long empId, Long salIncrease) throws Exception;

    6. A session bean client is required to invoke the method exposed through the remote interface. Call the exposed method in the client's main method.

        final Context context = getInitialContext();
        SessionEJB sessionEJB = (SessionEJB) context.lookup("Your-JNDI-lookup");
        sessionEJB.salaryRaise(new Long(200), new Long(1000));

    7. Deploy the session bean and run the client. The salary of the employee with Id 200 will be increased by 1000.

    Read the article

  • Lots of goodies

    - by wcoekaer
    We just issued a press release with a number of very good updates for everyone. There are a few things of importance:

    1) As of right now, Oracle Linux 6 with the Unbreakable Kernel is certified with a number of Oracle products, such as Oracle Database 11gR2 and Oracle Fusion Middleware. The certification pages in the Oracle Support portal will be updated with the latest certification status for the various products. As always, we have gone through a long period of very comprehensive testing and validation to ensure that the whole stack works really well together, with very large database workloads, middleware application workloads, etc.

    2) Standard certification efforts for Oracle Linux 6 with the Red Hat Compatible Kernel are in progress, and we expect them to be completed in the next few months. Because of the compatibility between OL6 and RHEL6, we can then also state certification for RHEL6.

    3) Oracle Linux binaries (and of course source code) have been free for download and use (including production, not just trial periods) since day one. You can freely redistribute the binaries, unlike many other Linux vendors where you need to pay a support subscription to even get access to the binaries. We offered both the base distribution release DVDs (OL4, OL5, OL6) and the update releases, such as 5.1, 5.2, etc. this way. Today, in this announcement, we also started to make available the bugfix and security updates released in between these update releases. So the errata streams (both binary and source code) for OL4, 5 and 6 are now free for download and use from http://public-yum.oracle.com. This includes uek and uek2.

    The nice thing is: if you want a complete, up-to-date system without support, use this; if you then need support, get a support subscription. Simple, convenient, effective. We have great SLAs in producing our update streams, consistency in release timing, and testing of all the components. Have at it!

    Read the article

  • QueryUnit 0.0.0.8 – Trust No One

    - by Davide Mauri
    Yesterday I released an updated version of QueryUnit, version 0.0.0.8. QueryUnit now supports AreNotEqual, Greater, and Less assertions, and is more capable of managing string results.

    I must say that I cannot live anymore without proper unit testing of a BI solution. Just yesterday, one of the unit tests at a customer site failed, showing a subtle situation where the release of a new version of a custom application would have corrupted the source of BI data, with a very low chance that someone would have noticed it for several days. It can happen when you have more than 15 systems that handle the data needed by your BI solution. The key message of this situation is "Trust No One": if your data hasn't passed quality testing, it's not trustable. Period.

    QueryUnit is now officially a hero :) No superpowers yet, but useful above all. http://queryunit.codeplex.com/

    Read the article

  • Cloning from a given point in the snapshot tree

    - by Fat Bloke
    Although we have just released VirtualBox 4.3, this quick blog entry is about a longer-standing ability of VirtualBox when it comes to snapshots and cloning, and was prompted by a question posed internally here in Oracle: "Is there a way I can create a new VM from a point in my snapshot tree?"

    Here's the scenario: let's say you have your favourite work VM, which is Oracle Linux based, and as you installed different packages, such as database, middleware, and the apps, you took snapshots at each point. But you then need to create a new VM for some other testing, or to share with a colleague who will be using the same Linux and database layers but may want to reconfigure the middleware tier, and may want to install his own apps.

    All you have to do is right click on the snapshot that you're happy with and clone. Give the VM that you are about to create a name; if you plan to use it on the same host machine as the original VM, it's a good idea to "Reinitialize the MAC address" so there's no clash on the same network. Now choose the clone type: if you plan to use this new VM on the same host as the original, you can use linked cloning, else choose full.

    At this point you have a choice about what to do with your snapshot tree. In our example, we're happy with the Linux and database layers, but we may want to allow our colleague to change the upper tiers, with the option of reverting back to our known-good state, so we'll retain the snapshot data in the new VM from this point on. The cloning process then chugs along, and may take a while if you chose a full clone. Finally, the newly cloned VM is ready with the subset of the snapshot tree that we wanted to retain. Pretty powerful, and very useful.

    Cheers, -FB

    Read the article

  • PARTNER News: Tips and Guidelines from Avago (formerly LSI)

    - by Zeynep Koch
    In this blog write-up we would like to focus our attention on one of our IHV partners, Avago (formerly LSI). Avago and Oracle have been collaborating at many levels for many years. At the lowest level, Avago and Oracle engineer solutions to inbox advanced features in our I/O device drivers. We collaborate to test, verify and optimize these drivers in Oracle Linux with Unbreakable Enterprise Kernel. Both LSI Nytro and Sun F-Series PCIe flash devices are supported inbox in Oracle Linux with Unbreakable Enterprise Kernel. By collaborating early in the engineering design cycle we can find and resolve issues sooner and deliver to the end customer a fully optimized platform for I/O efficiency and data protection. Hear more about the partnership and benefits in this podcast: LSI and Oracle Partnership.

    Avago has also been working on a technical whitepaper and a video whiteboard to explain some of the optimizations you can achieve by using smart flash cache with Oracle Linux:

    - Technical paper: Improve Database Performance Using Sun Flash Accelerator Card, Database Smart Flash Cache and Oracle Linux
    - Video: Improving DB Performance with Database Smart Flash Cache

    If you want more information about the partnership and product benefits, you can visit the LSI Oracle alliance page.

    Read the article

  • Choosing a CMS to use with backend modules involving haskell and python [on hold]

    - by Butterflycode
    Hi, I am trying to decide on a CMS to use for a new project. Security is the most important element of the CMS. I am looking to use a PHP-based CMS such as Joomla or Drupal; however, PHP has many security flaws, which worries me. The data which needs to be secure will be inside a database and relates to account information. I am wondering what is the best way to do this?

    What I want is a frontend made in PHP/JS (Joomla), and then a backend API written in Haskell to handle money transfers, ensuring nothing goes wrong. In between the two I want a controller written in perhaps Python or C. I never want the PHP to touch the database. I want it to relay messages to the controller written in Python or C, which then writes to the database, sanitising data, etc.

    Am I perhaps thinking too deeply about this? Just wondering if anyone has any ideas on what I should do. I can't quite explain what the project is, as I don't want the idea to be stolen, but it involves a lot of money transactions, so security is essential.

    Read the article

  • What makes you look like a bad developer (ie a hacker) [on hold]

    - by user134583
    This comes from a lot of people about me, so I have to look at myself. I wonder what makes one a bad developer (i.e. a hacker). These are a few things about me:

    - I use the IDE intensively, all features, you name it: auto-completion, refactoring, quick fixes, open type, view hierarchy, API documentation, etc.
    - When I write code for a project in a domain I am not used to (where I can't have fluency, because it is new), I only have very rough high-level ideas.
    - I don't use the standard modeling diagrams for early detailed planning; I use unorthodox diagrams that I invented when I need to draw the design in detail. I don't use UML or similar; I find them not enough. I divide the sorts of diagrams I draw into 3 types: very high-level diagrams which can probably be understood by almost anybody; data entity diagrams used for modeling data objects only (like ER diagrams, and trees for inheritance and composition); and action diagrams for agents/classes and their interactions on the data objects they contain.
    - I constantly change the interface (public methods) between interacting agents/classes if the need arises. I am more restrained once the interface and the module have matured.
    - I write initial concept code in a quick, hackie way, just so that the module works in the general cases and I can play around with it. The module is then refactored intensively after playing around, so I can see more corner cases that I couldn't (or wouldn't want to) anticipate before writing code.
    - I use JUnit for integration-like tests, by using the TestSuite class and ordering unit test classes in the suite.
    - I use the debugger almost any time there is a problem, instead of reading the code.
    - I constantly search the internet for how to do something with a library that I haven't used a lot.

    So, judgment: am I a bad developer? A hacker? Put in other words, to make sure this is not considered off-topic: is it bad practice to make your code too agile during the incubating/prototyping phase of software development? Is it bad practice to use JUnit for integration testing? (I know there are other frameworks for integration testing, but those frameworks are for specific products, not general.)

    Read the article

  • SQL Server Configuration Scripting Utility Release 9

    - by Bill Graziano
    There’s another update to my little utility to script a SQL Server's configuration. I use this for two purposes. First, I use it to keep my database mirroring servers up to date. Second, I capture the output in a version control system and keep that for historical reference.

    In release 3.0.9 I made the following changes:

    - Rewrote the encrypted trigger scripting. It will now list the encrypted triggers in a comment in the table script, but can't actually script them.
    - It now scripts any server event notifications.
    - You can script a single database using the /scriptdb flag. Please note that it will also script the instance and system databases when it does this.
    - It will script any user-defined endpoints. This will capture your mirroring endpoints and, more importantly, any service broker endpoints.
    - It will gracefully skip Database Mail on the Express Edition.

    It still doesn't support SQL Server 2012. I think that's the next feature to add, though.

    Read the article

  • Best approach for tracking dependent state

    - by Pace
    Let's pretend I work on a project tracking application. The application is a database-backed, server-hosted web application. In this application there are Projects, which have many Activities, which have many Tasks. A Task has two date fields, an originalDueDate and a projectedDueDate. In addition, there are dynamic fields on the Activities and the Projects which indicate whether the Activity or Project is behind schedule, based on the projected due dates of the child tasks and various other variables such as remaining buffer time, etc.

    There are a number of things that can cause the projectedDueDate to change. For example, an employee working on the project may (via a server request) enter in a shipping delay. Alternatively, a site may (via a server request) enter in an unexpected closure. When any of these things occur, I need to not only update the projectedDueDate of the Task but also trigger the corresponding Project and Activity to update as well. What is the best way to do this?

    I've thought of the observer pattern, but I don't keep a single copy of all these objects in memory. When a request comes in, I query the Task in from the database; at that point there is no associated Activity in memory that would be a listener. I could remove the ability to query for Tasks and force the application to query first by Project, then by Activity (in context of Project), then by Task (in context of Activity), adding the observer relationships at each step, but I'm not sure if that is the best way. I could set up a database event listening system, so when a Task-modified event is dispatched I have a handler which queries for the Activity at that point. Or I could simply set up a two-way relationship between Task and Activity, so that the Task knows about its parent Activity, and when the Task updates its state it grabs its parent and updates its state as well.

    Right now I'm stuck considering all the options and am wondering if any single approach (it doesn't have to be a listed approach) is jumping out at others as the best.
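    For illustration, a compact sketch of the last option the asker lists: a two-way Task/Activity relationship where the Task pushes its change upward and the Activity recomputes its derived state. This is Java with invented names, and it deliberately ignores the asker's real constraint that the parent may not be loaded in memory:

        import java.time.LocalDate;
        import java.util.ArrayList;
        import java.util.List;

        class Task {
            private final Activity parent;
            private final LocalDate originalDueDate;
            private LocalDate projectedDueDate;

            Task(Activity parent, LocalDate originalDueDate) {
                this.parent = parent;
                this.originalDueDate = originalDueDate;
                this.projectedDueDate = originalDueDate;
                parent.add(this);
            }

            void setProjectedDueDate(LocalDate newDate) {
                this.projectedDueDate = newDate;
                parent.onChildChanged(this);   // push the update to the parent
            }

            boolean isLate() {
                return projectedDueDate.isAfter(originalDueDate);
            }
        }

        class Activity {
            private final List<Task> tasks = new ArrayList<>();
            private boolean behindSchedule;

            void add(Task t) { tasks.add(t); }

            // Recompute derived state whenever any child task changes;
            // a fuller version would also notify the owning Project.
            void onChildChanged(Task changed) {
                behindSchedule = tasks.stream().anyMatch(Task::isLate);
            }

            boolean isBehindSchedule() { return behindSchedule; }
        }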

    Read the article
