Search Results

Search found 6735 results on 270 pages for 'pre commit'.

Page 128/270

  • Debugging/Logging Techniques for End Users

    - by James Burgess
    I searched a bit but didn't find anything particularly pertinent to my problem - so please do excuse me if I missed something! A few months back I inherited the source to a fairly popular indie game project and have been working, along with another developer, on the code base. We recently made our first release since taking over development, but we're a little stuck. A few users are experiencing slowdowns/lagging in the current version compared to the previous one, and we are not able to reproduce these issues in any of our various development environments (debug, release, different OSes, different machines, etc.). What I'd like to know is: how can we go about implementing some form of logging/debugging mechanism in the game that users can enable, sending the reports to us for examination? We're not able to distribute debug binaries built against the MSVS 2010 runtimes, due to the licensing - and wouldn't want to, for a variety of reasons. We'd really like to get to the bottom of this issue, even if just to find out it's nothing to do with our code base and everything to do with their system configuration. At the moment we have no leads, and the community isn't a very technically savvy one, so we're unable to rely on 'expert' bug reports or investigations. I've seen debug logging mechanisms used in other applications and games for everything from logging simple errors to crash dumps. We're really at a loss at this stage, having been over every commit to the repository between the previous version and the current one without finding any real issues.
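
    One common approach - a minimal C++ sketch, not from the original post; the file name, the idea of gating it behind a config flag, and the call sites are all assumptions - is a lightweight opt-in file logger that affected players can switch on and whose output they mail back with their report:

        // Minimal opt-in logger: does nothing unless the player enables it
        // (e.g. via a hypothetical log_enabled flag in the game's config).
        #include <cstdarg>
        #include <cstdio>
        #include <ctime>

        class Logger {
        public:
            explicit Logger(bool enabled) : file_(NULL) {
                if (enabled)
                    file_ = std::fopen("game_log.txt", "w");
            }
            ~Logger() { if (file_) std::fclose(file_); }

            void log(const char* fmt, ...) {
                if (!file_) return;  // disabled: cost is a single branch
                std::fprintf(file_, "[%ld] ", static_cast<long>(std::time(NULL)));
                va_list args;
                va_start(args, fmt);
                std::vfprintf(file_, fmt, args);
                va_end(args);
                std::fprintf(file_, "\n");
                std::fflush(file_);  // flush so a crash doesn't lose the tail
            }

        private:
            std::FILE* file_;
        };

    Instrumenting the main loop with per-frame timings around the suspect subsystems would let an affected player produce a trace you can diff against one from a machine that runs fine.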

    Read the article

  • Git Project Dependencies on GitHub

    - by VirtuosiMedia
    I've written a PHP framework and a CMS on top of the framework. The CMS is dependent on the framework, but the framework exists as a self-contained folder within the CMS files. I'd like to maintain them as separate projects on GitHub, but I don't want to have the mess of updating the CMS project every time I update the framework. Ideally, I'd like to have the CMS somehow pull the framework files for inclusion into a predefined sub-directory rather than physically committing those files. Is this possible with Git/GitHub? If so, what do I need to know to make it work? Keep in mind that I'm at a very, very basic level of experience with Git - I can make repositories and commit using the Git plugin for Eclipse, connect to GitHub, and that's about it. I'm currently working solo on the projects, so I haven't had to learn much more about Git so far, but I'd like to open it up to others in the future and I want to make sure I have it right. Also, what should my ideal workflow be for projects with dependencies? Any tips on that subject would also be greatly appreciated. If you need more info on my setup, just ask in the comments.
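
    What's described here is the use case for Git submodules: the CMS repository records a pointer to a specific commit of the framework repository instead of carrying its files. A minimal sketch, assuming a hypothetical framework URL and a lib/framework sub-directory:

        # inside the CMS working copy: register the framework as a submodule
        git submodule add https://github.com/example/framework.git lib/framework
        git commit -m "Track framework as a submodule"

        # after a fresh clone of the CMS, fetch the framework files
        git submodule update --init

        # to move the CMS to a newer framework commit
        cd lib/framework
        git pull origin master
        cd ../..
        git commit -am "Bump framework submodule"

    The CMS history then only changes when you deliberately bump the submodule pointer, which keeps the two projects' histories cleanly separated.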

    Read the article

  • How to ignore certain coding standard errors in PHP CodeSniffer

    - by Tom
    We have a PHP 5 web application and we're currently evaluating PHP CodeSniffer in order to decide whether forcing code standards improves code quality without causing too much of a headache. If it seems good we will add an SVN pre-commit hook to ensure all new files committed on the dev branch are free from coding standard smells. Is there a way to configure PHP CodeSniffer to ignore a particular type of error? Or to get it to treat a certain error as a warning instead? Here's an example to demonstrate the issue:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        </head>
        <body>
        <div>
        <?php
            echo getTabContent('Programming', 1, $numX, $numY);
            if (isset($msg)) {
                echo $msg;
            }
        ?>
        </div>
        </body>
        </html>

    And this is the output of PHP_CodeSniffer:

        > phpcs test.php
        --------------------------------------------------------------------------------
        FOUND 2 ERROR(S) AND 1 WARNING(S) AFFECTING 3 LINE(S)
        --------------------------------------------------------------------------------
          1 | WARNING | Line exceeds 85 characters; contains 121 characters
          9 | ERROR   | Missing file doc comment
         11 | ERROR   | Line indented incorrectly; expected 0 spaces, found 4
        --------------------------------------------------------------------------------

    I have an issue with the "Line indented incorrectly" error. I guess it happens because I am mixing the PHP indentation with the HTML indentation, but this makes it more readable, doesn't it? (Taking into account that I don't have the resources to move to an MVC framework right now.) So I'd like to ignore it, please.
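
    For what it's worth, PHP_CodeSniffer lets a custom ruleset.xml include a standard and then disable or demote individual sniffs. The exact sniff name below is an assumption - running phpcs -s prints the sniff code behind each message so you can confirm it:

        <?xml version="1.0"?>
        <ruleset name="MyPEAR">
            <description>PEAR, with the scope-indent sniff demoted.</description>
            <rule ref="PEAR"/>
            <!-- assumed sniff code; verify with `phpcs -s test.php` -->
            <rule ref="PEAR.WhiteSpace.ScopeIndent">
                <type>warning</type> <!-- or <severity>0</severity> to silence it entirely -->
            </rule>
        </ruleset>

    Then point the tool at it with phpcs --standard=/path/to/ruleset.xml test.php, including in the pre-commit hook.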

    Read the article

  • Latest News on Service, Field Service and Depot Repair Products

    - by LuciaC
    Service and Depot Repair Customer Advisory Boards (CAB): In November 2012 the Service and Depot Repair CABs joined together for a combined meeting at Oracle HQ in Redwood Shores, California to discuss all the latest news in the Oracle Service, Field Service and Depot Repair products. Over four days attendees shared their experiences with implementing and using these EBS CRM products and heard details of recent enhancements and future product plans direct from Development. You can access all the Oracle presentations via Doc ID 1511768.1. Here are just some of the highlights:

    - Field Service: Next Generation Dispatch Center; Endeca Integration; Case Study: Oracle Sun Field Service implementation
    - Mobile Field Service: New capabilities for technician-facing applications
    - Service: Integration with Oracle Projects; New Teleservice enhancements
    - Spares Management: Supplier Warranty; External Repair Execution
    - Oracle Knowledge (Inquira): Introduction for Service Organizations

    If you weren't at the CAB, take a look at these presentations for great information about what's new and what's coming up in these products.

    12.1.3++ Features for Field Service, Mobile Field Service, Spares Management, FSTP & Advanced Scheduler: In June 2012 the R12.1.3++ patches were released for Field Service, Mobile Field Service, FSTP and Advanced Scheduler. These patches contain new and updated functionality for these CRM Service suite modules. New functionality includes:

    - Field Service/FSTP/MFS: Support for Transfer Parts across subinventories in different organizations; Validation to ensure Installed Item matches Returned Item; MFS Wireless - Support for Special Address Creation; MFS Wireless - Enhanced Debrief Flow
    - Advanced Scheduler: Scheduler UI - Display of Spares Sourcing Information; Auto Commit (Release) Tasks by Territory; Dispatch Center UI - Display Spare Parts Arrival Information
    - Spares Management: Enhancements to the Task Reassignment Process; Enhancements to the Parts Requirements UI
    - Supply Chain: Enhancements to allow filtering of ship methods from source location by distance

    Check the following notes for more details and relevant patch numbers:

    - Doc ID 1463333.1 - Oracle Field Service Release Notes, Release 12.1.3++
    - Doc ID 1452470.1 - Field Service Technician Portal 12.1.3++ New Features
    - Doc ID 1463066.1 - Oracle Advanced Scheduler Release Notes, Release 12.1.3++
    - Doc ID 1463335.1 - Oracle Spares Management Release Notes, Release 12.1.3++
    - Doc ID 1463243.1 - Oracle Mobile Field Service Release Notes, Release 12.1.3++

    Read the article

  • C IDE for Mac needed

    - by StasM
    I'm looking for a heavy-duty C/C++ IDE for Mac that would satisfy the following criteria:

    - Work with big projects (~5000 files, some of them 100K big) efficiently.
    - Have good navigation, both file-based and symbol-based - i.e. "go to file", "go to function", etc., with autocompletion support.
    - Support for "go to declaration/definition" for symbols - functions, structures, etc.
    - Auto-adding new files in folders already in the project.
    - Support for code completion for values, function names, etc.
    - At least rudimentary CPP macro understanding - i.e. with #define foo bar, foo() should take me either to the #define or to the actual bar. I understand full CPP parsing may be hard, but I hope for at least the obvious cases.
    - Support for displaying parameter names/types by function name, preferably integrated with the previous item, for functions defined in the project. Support for libc would be nice too :)
    - (optional) Cross-project search support (I can manage with grep -r if everything else works)
    - (optional) SVN support, at least to some extent (update, commit, mark updated)

    Is there such an editor around? Free would be nice, but I'm ready to part with some money if it's good enough. I'm using TextMate now but I'm not satisfied with it. I tried Xcode but it seems unable to handle a large project - it just crashed...

    Read the article

  • How atomic is a SELECT INTO?

    - by leo.pasta
    Last week I hit an interesting situation that prompted me to challenge a long-standing assumption. I always thought that a SELECT INTO was an atomic statement, i.e. it would either complete successfully or the table would not be created. So I was very surprised when, after a "select into" query was chosen as a deadlock victim, the next execution (as the app would handle the deadlock and retry) failed with:

        Msg 2714, Level 16, State 6, Line 1
        There is already an object named '#test' in the database.

    The only hypothesis we could come up with was that the "create table" part of the statement was committed independently from the actual "insert". We can confirm that by capturing the "Transaction Log" event on Profiler (filtering by SPID). When we run:

        SELECT * INTO #results FROM master.sys.objects

    we get the following output on Profiler: it is easy to see the two independent transactions. Although this behaviour was a surprise to me, it is very easy to work around if you feel the need (as we did in this case). You can either change it into independent "CREATE TABLE / INSERT SELECT" statements, or you can enclose the SELECT INTO in an explicit transaction:

        SET XACT_ABORT ON
        BEGIN TRANSACTION
        SELECT * INTO #results FROM master.sys.objects
        COMMIT

    Read the article

  • Should I ditch a creative pet project in lieu of one that would demonstrate skills more applicable to an employer?

    - by Hart Simha
    I am currently working on a project on GitHub that I think would be a good demonstration of my initiative, creativity and enthusiasm. It is an educational game I am developing in pygame that helps the user improve their development productivity by learning vim - specifically with python, though learning to code faster with vim should transfer to any language. I think this is something that might have mass appeal and measurable benefit to a lot of people. -However- I am graduating from college in a month (my degree is computer science with a minor in english), with no experience that is relevant to helping me get any kind of job in the field, and a gpa that doesn't tout my merits. I could pursue a career in game development, but it's not necessarily what I'm most interested in, and I see myself applying to startups around the country. To the places I am looking at applying, showing that I have experience with pygame is going to be largely irrelevant, except as a demonstration of my ability to code, period. A lot of skills that ARE more marketable, such as data modeling, GIS, mobile development, javascript, the .NET framework, and various web development technologies, are not going to be showcased by this project (on the upside, employers do like to see familiarity with git and python). I'm wondering if I should sink all my free time over the next couple of months into this project, since I'm motivated and interested in it, and whether the value of being able to demonstrate ambition and 'good ideas' (for lack of a better term, and in my own opinion) will compensate for the absence of more sought-after skills. I am probably at a point where I should either commit fully to this project now or put it on the back burner in favor of something else, and I am leaning towards continuing with what I am already working on, because I think it's a great idea and something achievable for me with enough dedication over the next couple of months. But the most important thing to me is being able to get a job out of college, which I am exceedingly concerned about, as the professional landscape I am navigating for the first time is a lot more intimidating than I could have anticipated, with almost every job (even short-term contract positions) requiring years of experience which I lack. Oh, and in case anyone is interested, my repository is here: www.github.com/hmsimha/vimagine

    Read the article

  • .NET Demon 1.0 Released

    - by theo.spears
    Today we're officially releasing version 1.0 of .NET Demon, the Visual Studio extension Alex Davies and I have been working on for the last 6 months. There have been beta versions available for a while, but we have now released the first "official" version and made it available to purchase. If you haven't yet tried the tool, it's all about reducing the time between when you write a line of code and when you are able to try it out, so you don't have to wait:

    - Continuous compilation: We use spare CPU cycles on your machine to compile your code in the background when you make changes, so assemblies are up to date whenever you want to run them. Some clever logic means we only recompile code which may have been affected by your changes.
    - Continuous save: .NET Demon can perform background saving, so you don't lose any work in case of crashes or power failures, and are less likely to forget to commit changed files.
    - Continuous testing (Experimental): The testing tool in .NET Demon watches which code you change in your solution and automatically reruns the tests which are impacted, so you learn about any breaking changes as quickly as possible. It also gives you inline test coverage information inside Visual Studio. Continuous testing is still experimental - it will work fine in many cases, but we know it's not yet perfect.

    Releasing version 1.0 doesn't mean we're pausing development. We will still be regularly providing new versions with improved functionality and fixes for any bugs people come across. Visit the .NET Demon product page to download.

    Read the article

  • SQL SERVER – Using MAXDOP 1 for Single Processor Query – SQL in Sixty Seconds #008 – Video

    - by pinaldave
    Today’s SQL in Sixty Seconds video is inspired by my presentation at TechEd India 2012 on Speed Up! – Parallel Processes and Unparalleled Performance. There are always special cases when it comes to SQL Server: there are always a few queries which give optimal performance when executed on a single processor, and there are always queries which give optimal performance when executed on multiple processors. I will be presenting how to identify such queries, as well as the best practices related to the same. In this quick video I demonstrate how, if a query gives optimal performance when running on a single CPU, one can restrict it to a single CPU by using the hint OPTION (MAXDOP 1). More on Errors:

    - Difference Temp Table and Table Variable – Effect of Transaction
    - Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT
    - Debate – Table Variables vs Temporary Tables – Quiz – Puzzle – 13 of 31

    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Video
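
    For reference, the OPTION (MAXDOP 1) hint mentioned above is appended to the end of the statement - a minimal sketch (the table and column names are made up for illustration):

        -- force this query onto a single CPU, overriding the server-wide
        -- "max degree of parallelism" setting for just this statement
        SELECT CustomerID, SUM(OrderTotal) AS Total
        FROM dbo.Orders
        GROUP BY CustomerID
        OPTION (MAXDOP 1);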

    Read the article

  • CodeGolf : Find the Unique Paths

    - by st0le
    Here's a pretty simple idea. In this pastebin I've posted some pairs of numbers. These represent nodes of a directed connected graph. The input on stdin will be pairs, one per line (they'll be numbers; I'm using letters in this example):

        c d
        q r
        a b
        b c
        d e
        p q

    So "x y" means x is connected to y (not vice versa). There are 2 paths in that example: a->b->c->d->e and p->q->r. You need to print all the unique paths from that graph. The output should be of the format:

        a->b->c->d->e
        p->q->r

    Notes:

    - You can assume the numbers are chosen such that one path doesn't intersect another (one node belongs to one path).
    - The pairs are in random order.
    - There is more than one path, and paths can be of different lengths.
    - All numbers are less than 1000.

    If you need more details, please leave a comment. I'll amend as required. Shameless plug: for those who enjoy code golf, please commit at Area51 for its very own site :) (for those who don't enjoy it, please support it as well, so we'll stay out of your way...)
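
    Not a golfed entry, but a minimal Python sketch of the underlying algorithm, assuming the guarantees above (each node belongs to at most one path): build a successor map, find the nodes that never appear as a destination, and walk each chain from there.

        import sys

        def unique_paths(pairs):
            nxt = dict(pairs)                      # x -> y successor map
            starts = set(nxt) - set(nxt.values())  # nodes never used as a destination
            paths = []
            for start in sorted(starts):
                path = [start]
                while path[-1] in nxt:             # follow the chain to its end
                    path.append(nxt[path[-1]])
                paths.append("->".join(path))
            return paths

        pairs = [tuple(line.split()) for line in sys.stdin if line.strip()]
        print("\n".join(unique_paths(pairs)))

    On the example input this prints a->b->c->d->e and p->q->r, one path per line.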

    Read the article

  • How to use the unit of work and repository patterns in a service-oriented environment

    - by A. Karimi
    I've created an application framework using the unit of work and repository patterns for its data layer. Data consumer layers such as presentation depend on the data layer design. For example, a CRUD abstract form has a dependency on a repository (IRepository). This architecture works like a charm in client/server environments (e.g. a WPF application and a SQL Server). But I'm looking for a good pattern to change or reuse this architecture for a service-oriented environment. Of course I have some ideas:

    Idea 1: The "Adapter" design pattern. Keep the current architecture and create a new unit of work and repository implementation which can work with a service instead of the ORM. Data layer consumers are loosely coupled to the data layer, so it's possible, but the problem is the unit of work: I have to create a context which tracks the objects' state on the client side and sends the changes to the server side on calling "Commit" (something that I think RIA has done for Silverlight). Here is the diagram:

        ----------- CLIENT -----------       ------------------ SERVER ----------------------
        [ UI ] -> [ UoW/Repository ]  --->   [ Web Services ] -> [ UoW/Repository ] -> [ DB ]

    Idea 2: Add another layer. Add another layer (let's say "local services" or "data provider"), then put it between the data layer (unit of work and repository) and the data consumer layers (like UI). Then I have to rewrite the consumer classes (CRUD and other classes which are dependent on IRepository) to depend on another interface. And the diagram:

        ----------------- CLIENT ------------------       ------------------ SERVER ----------------------
        [ UI ] -> [ Local Services/Data Provider ]  --->  [ Web Services ] -> [ UoW/Repository ] -> [ DB ]

    Please note that I have the local services layer in the current architecture, but it doesn't expose the data layer functionality. In other words, the UI layer can communicate with both the data and local services layers, whereas the local services layer also uses the data layer:

        +----+      +----------------+      +------+
        |    | ---> | Local Services | ---> |      |
        | UI |      +----------------+      | Data |
        |    |                              |      |
        |    | ---------------------------> |      |
        +----+                              +------+

    Read the article

  • Selecting a JAX-RS implementation for a new project

    - by Fernando Correia
    I'm starting a new Java project which will require a RESTful API. It will be a SaaS business application serving mobile clients. I have developed one project with Java EE 6, but I'm not very familiar with the ecosystem, since most of my experience is on the Microsoft platform. Which would be a sensible choice of JAX-RS implementation for a new project such as described? Judging by Wikipedia's list, the main contenders seem to be Jersey, Apache CXF, RESTEasy and Restlet, but the Comparison of JAX-RS Implementations cited on Wikipedia is from 2008. My first impressions from their respective homepages are that:

    - CXF aims to be a very comprehensive solution (it reminds me of WCF in the Microsoft space), which makes me think it can be more complex to understand, set up and debug than what I need;
    - Jersey is the reference implementation and might be a good choice, but it's legacy from Sun and I'm not sure how Oracle is treating it (the announcements page doesn't work and the last commit notice is from 4 months ago);
    - RESTEasy is from JBoss and probably a solid option, though I'm not sure about the learning curve;
    - Restlet seems to be popular but has a lot of history; I'm not sure how up-to-date it is in the Java EE 6 world or if it carries a heavy J2EE mindset (like lots of XML configuration).

    What would be the merits of each of these alternatives? What about learning curve? Feature support? Tooling (e.g. NetBeans or Eclipse wizards)? What about ease of debugging and also deployment? Are any of these projects more up-to-date than the others? How stable are they?
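
    One point worth noting: because JAX-RS is a standard, a resource written against only the javax.ws.rs annotations runs on any of the four. A minimal illustrative sketch (the path and payload are made up):

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;

        @Path("/customers")
        public class CustomerResource {

            @GET
            @Path("/{id}")
            @Produces(MediaType.APPLICATION_JSON)
            public String getCustomer(@PathParam("id") long id) {
                // real code would look the entity up and let the provider
                // serialize it; a literal keeps the sketch self-contained
                return "{\"id\": " + id + ", \"name\": \"example\"}";
            }
        }

    Coding to the standard keeps the choice of Jersey, CXF, RESTEasy, or Restlet's JAX-RS extension a deployment detail that can be revisited later.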

    Read the article

  • Looking for a simple web interface with subversion support and ticket/issue tracker [closed]

    - by Stefan Andre Brannfjell
    I am working on a small project and we have a few programmers on the job. We are using Subversion to commit updates and keep all developers up to date on their workstations. However, we have yet to find a suitable web interface for it. I have tried Redmine, but the installation process was extremely bothersome and involved. Once I got it to work I found that it was slow and did not meet my expectations, and it seems a bit complex for our needs. I would prefer a solution that supports the lighttpd web server; however, that seems to be very hard to come by - those I have found seem to only have Apache support. Functionality I wish for the website:

    - login to an svn account
    - view svn logs
    - view & create issues, todo lists, etc.
    - view svn diffs

    Do you have any open source recommendations that I can try out? I will appreciate any kind of reply. :) Edit: I wish to host the website on our own servers.

    Read the article

  • Improve Customer Experience with Real-Time Scheduling

    - by ruth.donohue
    Recently, my husband rearranged his busy work schedule so that he could stay home an entire afternoon to wait for the alarm company to reset the password to our alarm system, only to discover at the end of the afternoon that the field service rep wasn’t going to be able to make the appointment after all. And, the company asked him to reschedule and block off time for another afternoon. Needless to say, my husband wasn’t happy with that experience. Unfortunately, customer experiences like this happen every day. As a business, you can’t afford these types of encounters. It’s too easy for your customers to turn to one of your competitors once they’ve reached the point of frustration. Customer experience and customer loyalty are more important than ever. So how can you prevent something like this from occurring? With the newly available Siebel Field Service Integration with Oracle Real-Time Scheduler, your service organization can:

    - Create cost-optimized plans and schedules to improve operating efficiencies
    - Deliver more accurate ETAs and shorten appointment windows
    - Minimize the impact of in-day events such as delays on site, sickness, poor weather conditions, and vehicle breakdowns

    Rather than requiring them to wait for an entire afternoon, imagine asking customers to be available for only an hour. And being able to commit to that time by working around unforeseen events and understanding the impact of delays or re-routings before they become customer issues. What would your customer experience and customer satisfaction be like then? Learn more about the Siebel Field Service Integration with Oracle Real-Time Scheduler:

    - Register for and attend the upcoming webcast on Thursday, March 10th at 8:30 AM Pacific Time
    - Read the press release, data sheet, and solution brief
    - Visit the Siebel Field Service webpage

    Read the article

  • Should one generally develop a client library for REST services to help prevent API breakages?

    - by BestPractices
    We have a project where the UI code will be developed by the same team but in a different language (Python/Django) from the services layer (REST/Java). The code for each layer lives in a different repository and can follow a different release cycle. I'm trying to come up with a process that will prevent or reduce breaking changes in the services layer from the perspective of the UI layer. I've thought to write integration tests at the UI layer level that we'll run whenever we build the UI or the services layer (we're using Jenkins as our CI tool to build the code, which is in two Git repos); if there are failures, then something in the services layer broke and the commit is not accepted. Would it also be a good idea (is it a best practice?) to have the developers of the services layer create and maintain a client library for the REST service that lives in the UI layer, which they update whenever there is a breaking change in their service API? Conceivably, we would then have the advantage of a statically-typed API that the UI code builds against. If the client library API changes, then the UI code won't compile (so we'll know sooner that there was a breaking change). I'd also still run the integration tests upon building the UI or services layer to further validate that the integration between the UI and the service(s) still works.

    Read the article

  • Can't Mount Phone, "according to mtab, /dev/sdb1 is already mounted on /"

    - by RPG Master
    My myTouch Slide wasn't mounting, so I decided to open Disk Utility. My phone shows up, but when I click "Mount" it gives me this error:

        Error mounting: mount exited with exit code 1: helper failed with:
        mount: according to mtab, /dev/sdb1 is already mounted on /
        mount failed

    Here's my mtab:

        /dev/sdb1 / ext4 rw,errors=remount-ro,commit=0 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        none /sys sysfs rw,noexec,nosuid,nodev 0 0
        fusectl /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        none /dev devtmpfs rw,mode=0755 0 0
        none /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        none /dev/shm tmpfs rw,nosuid,nodev 0 0
        none /var/run tmpfs rw,nosuid,mode=0755 0 0
        none /var/lock tmpfs rw,noexec,nosuid,nodev 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        gvfs-fuse-daemon /home/matthew/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=matthew 0 0
        /dev/sdg1 /media/Seagate\040GoFlex ext4 rw,nosuid,nodev,uhelper=udisks 0 0

    EDIT: Here's my fstab:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid -o value -s UUID' to print the universally unique identifier
        # for a device; this may be used with UUID= as a more robust way to name
        # devices that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        /dev/sda1 / ext4 errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=3b0db205-2bdb-4c98-a506-6bdd3520d540 none swap sw 0 0

    Read the article

  • mdadm: breaks boot due to "is not ready yet or not present" error

    - by BarsMonster
    This is so damn frustrating :-| I've spent like 20 hours on this nice error, and it seems dozens of people over the Internet have too, and there's no clear solution yet. I have a non-system RAID-5 of 5 disks, and it's fine. But during boot the system says "/dev/md0 is not ready yet or not present" and asks me to press 'S'. Very nice for Ubuntu Server - I have to bring over a monitor and keyboard to continue. After this the system boots and it's all fine: the md0 device works, and /proc/mdstat is fine. When I do mount -a, it mounts the array without errors and works fine. As a dumb and shameful workaround I added noauto in /etc/fstab and did the mounting in /etc/rc.local - it works fine then. Any hints how to make it work properly? fstab:

        UUID=3588dfed-47ae-4c32-9855-2d69df713b86 /var/bigfatdisk ext4 noauto,noatime,data=writeback,barrier=0,nobh,commit=5 0 0

    mdadm config (it is autogenerated):

        # mdadm.conf
        #
        # Please refer to mdadm.conf(5) for information about this file.
        #
        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR CENSORED

        # definitions of existing MD arrays
        ARRAY /dev/md/0 metadata=1.2 bitmap=/var/md0_intent UUID=efccbeb6:a0a65cd6:470dcdf3:62781188 name=LBox2:0

        # This file was auto-generated on Mon, 10 Jan 2011 04:06:55 +0200
        # by mkconf 3.1.2-2

    Read the article

  • Use Oracle Product Hub Business Events to Integrate Additional Logic into Your Business Flows

    - by ToddAC-Oracle
    Business events provide a mechanism to plug in and integrate additional business processes or custom code into standard business flows. You could send a notification to a business user, write to advanced queues, or perform some custom processing. Built-in business events are available for each flow, such as Item Creation, Item Updation, User-Defined Attribute Changes, Change Order Creation, Change Order Status Changes and others. To get a list of business events, refer to the PIM Implementation Guide or Using Business Events in PLM and PIM Data Librarian (Doc ID 372814.1). If you are planning to use business events, Doc ID 1074754.1 walks you through a setup with examples: How to Subscribe and Use Product Hub (PIM / APC) Business Events [Video] (Doc ID 1074754.1). Review the 'Presentation' section of Doc ID 1074754.1 for complete information and best practices to follow while implementing code for subscriptions. Learn things you might want to avoid, like commit statements, for instance. Doc ID 1074754.1 also provides sample code for testing, which can be used to troubleshoot missing setups or frequently experienced issues. Take advantage and run a test ahead of time with the sample code to isolate any issues from within business-specific subscription code. Get more out of Oracle Product Hub by using Business Events!

    Read the article

  • I need advice on laptop purchase for university [closed]

    - by Systemic33
    I'm currently in university studying Computer Science/IT/Information Technology, and this first year I've managed with the laptop I had: an ASUS Eee PC 1000H with a 10.1" screen. But this is getting way too underpowered and small for programming more than just quick introduction exercises, so I'm looking to buy a more suitable laptop. It's not supposed to be a desktop replacement though, since I've already got a pretty good desktop with a 24" monitor. The kind of laptop I want to buy is one suited for university. If this bears any significance, I'm working in Java at the moment, but I will likely work with lots of other things, including web development. I'm looking to spend about $1700, plus/minus. It should be powerful/big enough for working on programming projects as well as the usual university stuff like MATLAB, Maple, etc. out "in the field", and sometimes for maybe a week when visiting my parents. What I'm looking at right now is the ASUS Zenbook UX31A with the 1920 x 1080 resolution on a 13.3" IPS display, but I'm kinda nervous that this will be too petite for programming. In essence I'm looking for a powerful computer that has good enough battery life and looks good. I would love suggestions or any type of feedback, either with maybe a better choice, or input on what it's like programming on 13" laptops. Very much thanks in advance to anyone who even went through all that! PS. I don't want a Mac, or my inner karma would commit seppuku xD But experiences from working on the 13" MacBook Air would be kinda equivalent to the Zenbook I'm considering, so I would love to hear those. tl;dr The quick brown fox jumps over the lazy dog ;)

    Read the article

  • EJB Persist On Master Child Relationship

    - by deepak.siddappa(at)oracle.com
    Let us take a scenario where users want to persist a master-child relationship. Here we have two tables, dept and emp (using the Scott schema), which have a master-child relation. Model Diagram: In the above model diagram, Dept is the master table, Emp is the child table, and Dept is related to Emp by a one-to-n relationship. Let's assume we need to make new entries in the emp table using the EJB persist method. Create an Emp form by manually dropping the fields, where deptno (a foreign key in the emp table) will be dropped as Single Selection -> ADF Select One Choice from the deptFindAll data control. Make sure to bind all field variables in the backing bean. Employee Form: Once the Emp form is created, if the persistEmp() method is used to commit the record, it will persist all the Emp fields into the emp table except deptno, because deptno is passed as an object reference to the persistEmp method (it's a foreign key reference). So deptno can't be passed to the persistEmp method directly; instead, deptno should be explicitly set on the emp object, and then the persist will save the deptno to the emp table. The solution below is one way to work around this scenario. Create a method in the session bean for adding emp records and expose this method in the data control. For example (here "em" is an EntityManager):

        // member variable in sessionEJBBean
        private EntityManager em;

        public void addEmpRecord(String ename, String job, BigDecimal deptno) {
            Emp emp = new Emp();
            emp.setEname(ename);
            emp.setJob(job);
            // setting the deptno explicitly
            Dept dept = new Dept();
            dept.setDeptno(deptno);
            // passing the dept object
            emp.setDept(dept);
            // persist the emp object data to the Emp table
            em.persist(emp);
        }

    From the Data Control palette, drop addEmpRecord as an ADF method button. In the Edit Action Binding window, enter the parameter values which are bound in the backing bean. For example, if the deptno text field is bound to the "deptno" variable in the backing bean, then in the EL Expression Builder pass the value as #{backingbean.deptno.value}. Binding:

    Read the article

  • What does your Lisp workflow look like?

    - by Duncan Bayne
    I'm learning Lisp at the moment, coming from a language progression that is Locomotive BASIC - Z80 Assembler - Pascal - C - Perl - C# - Ruby. My approach is to simultaneously:

    - write a simple web-scraper using SBCL, QuickLisp, closure-html, and drakma
    - watch the SICP lectures

    I think this is working well; I'm developing good 'Lisp goggles', in that I can now read Lisp reasonably easily. I'm also getting a feel for how the Lisp ecosystem works, e.g. Quicklisp for dependencies. What I'm really missing, though, is a sense of how a seasoned Lisper actually works. When I'm coding for .NET, I have Visual Studio set up with ReSharper and VisualSVN. I write tests, I implement, I refactor, I commit. Then when I'm done enough of that to complete a story, I write some AUATs. Then I kick off a Release build on TeamCity to push the new functionality out to the customer for testing & hopefully approval. If it's an app that needs an installer, I use either WiX or InnoSetup, obviously building the installer through the CI system. So, my question is: as an experienced Lisper, what does your workflow look like? Do you work mostly in the REPL, or in the editor? How do you do unit tests? Continuous integration? Packaging & deployment? When you sit down at your desk, steaming mug of coffee to one side and a framed photo of John McCarthy to the other, what is it that you do? Currently, I feel like I am getting to grips with Lisp coding, but not Lisp development ...

    Read the article

  • At which point is a continuous integration server interesting?

    - by Cedric Martin
    I've been reading a bit about CI servers like Jenkins and I'm wondering: at which point is one useful? Surely for a tiny project with only 5 classes and 10 unit tests, there's no real need. Here we've got about 1500 unit tests and they pass (on old Core 2 Duo workstations) in about 90 seconds (because they really test "units" and hence are very fast). The rule we have is that we cannot commit code when a test fails, so each developer runs all the tests before committing, to prevent regressions. Obviously, because all the developers always run all the tests, we catch errors due to conflicting changes as soon as one developer pulls the changes of another (when there are any). It's still not very clear to me: should I set up a CI server like Jenkins? What would it bring? Is it just useful for the speed gain? (Not an issue in our case.) Is it useful because old builds can be recreated? (But we can do this with Mercurial too, by checking out old revs.) Basically I understand it can be useful, but I fail to see exactly why. Any explanation taking into account the points I raised above would be most welcome.

    Read the article

  • Why the recent shift to removing/omitting semicolons from Javascript?

    - by Jonathan
    It seems to be fashionable recently to omit semicolons from Javascript. There was a blog post a few years ago emphasising that in Javascript, semicolons are optional and the gist of the post seemed to be that you shouldn't bother with them because they're unnecessary. The post, widely cited, doesn't give any compelling reasons not to use them, just that leaving them out has few side-effects. Even GitHub has jumped on the no-semicolon bandwagon, requiring their omission in any internally-developed code, and a recent commit to the zepto.js project by its maintainer has removed all semicolons from the codebase. His chief justifications were: it's a matter of preference for his team; less typing Are there other good reasons to leave them out? Frankly I can see no reason to omit them, and certainly no reason to go back over code to erase them. It also goes against (years of) recommended practice, which I don't really buy the "cargo cult" argument for. So, why all the recent semicolon-hate? Is there a shortage looming? Or is this just the latest Javascript fad?
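
    (For context, the usual counter-argument is that JavaScript's automatic semicolon insertion has edge cases. A minimal illustration - not taken from the posts cited above - is a line that begins with an opening parenthesis:

        var a = 1
        var b = 2
        (function () {        // no semicolon above, so this parses as: var b = 2(...)()
            console.log(a + b)
        })()                  // TypeError: 2 is not a function

    Semicolon-free styles typically handle this by putting a defensive leading semicolon on any line that starts with ( or [.)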

    Read the article

  • /etc/resolv.conf nameserver fd00::1

    - by user88631
    My /etc/resolv.conf constantly gets a mysterious entry. I run a home network with IPv6 provided by radvd; the interface is auto-configured by Network Manager. All name server lookups are lost when this line is first in my /etc/resolv.conf:

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver fd00::1
        nameserver 192.168.1.1
        search home.int

    When ping is working:

        $ cat /etc/resolv.conf
        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 192.168.1.1
        search home.int

    So something is putting fd00::1 at the start of the file. Note that if I ping6 fd00::1 I get "Destination unreachable: Administratively prohibited". To diagnose this I ran the router with a single cable connected to the Ubuntu machine, then ran tcpdump and restarted the network on the Ubuntu machine. "tcpdump ip6 -e -i eth0 | grep fd00" finds nothing, so it's not being advertised via the network. The only hit I got was when an upstream router refused a connection attempt from the Ubuntu machine to fd00::1. I have also switched on debugging for Network Manager, and it appears to set the mystery line:

        15:22:14 storage-pc NetworkManager[349]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete.
        15:22:14 storage-pc NetworkManager[349]: <warn> dnsmasq exited with error: Other problem (5)
        15:22:14 storage-pc NetworkManager[349]: <debug> [1346822534.281528] [nm-dns-manager.c:598] update_dns(): updating resolv.conf
        15:22:14 storage-pc NetworkManager[349]: <debug> [1346822534.281875] [nm-dns-manager.c:719] update_dns(): DNS: plugin dnsmasq ignored (caching disabled)
        15:22:14 storage-pc NetworkManager[349]: <info> ((null)): writing resolv.conf to /sbin/resolvconf
        15:22:14 storage-pc dbus[2184]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
        15:22:14 storage-pc dnsmasq[2875]: reading /etc/resolv.conf
        15:22:14 storage-pc dnsmasq[2875]: using nameserver 192.168.1.1#53
        15:22:14 storage-pc dnsmasq[2875]: using nameserver fd00::1#53

    Any suggestions on how to find out where this comes from?

    Read the article
