Search Results

Search found 2601 results on 105 pages for 'commit'.

  • Developing momentum on open source projects

    - by sashang
    Hi, I've been struggling to develop momentum contributing to open source projects. In the past I tried with gcc and contributed a fix to libstdc++, but it was a one-off, and even though I spent months of my spare time on the dev mailing list and reading through things, I never seemed to develop any momentum with the code. Eventually I unsubscribed, got my free time back, and uncluttered my mailbox. Like a lot of people, I have some little defunct open source projects lying around on the net, but they're not large and I'm the only contributor. At the moment I'm more interested in contributing to a large open source project, and I want to know how people got started, because I find it difficult to build momentum with a code base while working full time. Other, more regular contributors who are on the project full-time can make changes at will, and as a result they enter a positive feedback cycle where they understand the code and also know where it's heading. That raises the barrier to entry for those who come along later. My questions are for people who actively contribute to large open source projects, like the Linux kernel, gcc, or clang/llvm, or anything else with, say, a developer head count of more than 10:
    - How did you get started? Was there a large chunk of time in your life that you could just dedicate to working on the project? (I know in Linus's case he had a chunk of time, about 6 months, to get it started.)
    - What barriers to entry did you encounter?
    - Can you describe the initial stages of your time with the project, from when you had little understanding of the code to when you understood enough to commit regularly?
    Thanks

    Read the article

  • design for supporting entities with images

    - by brainydexter
    I have multiple entities, like Hotels and Destination Cities, which can contain images. The way my system is set up right now, I think of all the images as belonging to one universal set (a table in the DB contains file paths to all the images). When I have to add an image to an entity, I check whether the image already exists in this universal set. If it exists, I attach a reference to that image; otherwise I create a new one. E.g.:

        class ImageEntityHibernateDAO {
            public void addImageToEntity(IContainImage entity, String filePath, String title, String altText) {
                // Reuse an existing image if one with this file path is already stored.
                ImageEntity image = this.getImage(filePath);
                if (image == null)
                    image = new ImageEntity(filePath, title, altText);
                getSession().beginTransaction();
                entity.getImages().add(image);
                getSession().getTransaction().commit();
            }
        }

    My question is: earlier I had to write this code for each entity (and each entity would have a Set<ImageEntity> collection). So, instead of rewriting the same code, I created the following interface:

        public interface IContainImage {
            Set<ImageEntity> getImages();
        }

    Entities which have image collections also implement the IContainImage interface. Now, for any entity that needs to support adding images, the DAO call looks something like this:

        // in DestinationDAO::addImageToDestination
        imageDao.addImageToEntity(destination, imageFileName, imageTitle, imageAltText);
        // in HotelDAO::addImageToHotel
        imageDao.addImageToEntity(hotel, imageFileName, imageTitle, imageAltText);

    It'd be a great help if someone could provide some critique of this design. Are there any serious flaws that I'm not seeing right away?

    Read the article

  • Eclipse Java Code Formatter in NetBeans Plugin Manager

    - by Geertjan
    Great news for Eclipse refugees everywhere. Benno Markiewicz forked the Eclipse formatter plugin that I blogged about some time ago (here and here)... and he fixed many bugs while also adding new features. It's a handy plugin when you're (a) switching from Eclipse to NetBeans and want to continue using your old formatting rules, or (b) working in a polyglot IDE team; i.e., the formatting rules defined in Eclipse can now be imported into NetBeans IDE, and everyone will happily be able to conform to the same set of formatting standards. And now you can get it directly from Tools | Plugins in NetBeans IDE 7.4. News from Benno on the plugin, received from him today: the plugin is verified by the NetBeans community and available in the Plugin Manager in NetBeans IDE 7.4 (as shown above) and also at the NetBeans Plugin Portal, where you can also read quite some info about the plugin: http://plugins.netbeans.org/plugin/50877/eclipse-code-formatter-for-java. The issue with the empty undo buffer was solved with the help of junichi11: https://github.com/markiewb/eclipsecodeformatter_for_netbeans/issues/18. The issue with the lost breakpoints remains unsolved, and there was no further feedback; that is the main reason why the save action isn't activated by default. See also the open known issues at https://github.com/markiewb/eclipsecodeformatter_for_netbeans/issues?state=open. Features are as follows:
    - Global configuration and project-specific configuration.
    - On-save action, which is disabled by default.
    - Show the used formatter as a notification, which is enabled by default.
    Finally, Benno testifies to the usefulness, stability, and reliability of the plugin: "I use the Eclipse formatter provided by this plugin every day at work. Before I commit, I format the sources. It works and that's it. I am pleased with it." Here's where the Eclipse formatter is defined globally, in Tools | Options. And here is per-project configuration; i.e., use the Project Properties dialog of any project to override the global settings. Interested to hear from anyone who tries the plugin and has any feedback of any kind!

    Read the article

  • .NET Dependency Management Systems

    - by StriplingWarrior
    I have some .NET projects that are starting to get large enough to merit looking into dependency management solutions, so we don't have to copy binaries from one project to another. Here's what I've found so far:
    - NPanday is based on a port of Maven. I can't tell how recently it was worked on, but the last release was in May 2011.
    - NuGet seems to be under active development, and it appears to have support directly from Microsoft. Some people complained that it "only addresses dependency resolution," but I don't know what else it should address, or whether it has added more features since that point. It does appear to have recently added the ability to import binaries as part of the build process, so we don't have to commit them to our repositories.
    - Refix appears to still be in beta, after having received no attention since September 2011.
    Would somebody with recent experience using any of these dependency management tools (or any others that work well) share your experience? Is NuGet mature enough to use for dependency management? If not, what does it lack?
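
    For reference, the build-time restore that NuGet added is typically driven by a packages.config checked in next to each project; a minimal sketch (the package id and version here are illustrative, not from the original post):

        <?xml version="1.0" encoding="utf-8"?>
        <packages>
          <!-- pins each dependency by id and version; the binaries stay out of the repo -->
          <package id="Newtonsoft.Json" version="4.5.11" />
        </packages>

    The binaries can then be fetched on the build machine with something like "nuget install packages.config -OutputDirectory packages" instead of being committed to source control.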

    Read the article

  • Steps to send patch to Launchpad

    - by Alois Mahdal
    With a Git/GitHub background and knowing very little about the Bazaar VCS, I would like to occasionally report a bug on Launchpad and even send a patch. I'd like to do it in a "proper" way so that it's ready for merging or improvement while not getting in the way. I can't seem to find a decent, simple how-to suited to my needs. What I've done so far: I have created a Launchpad account, reported the bug, installed Bazaar, and set up SSH keys, etc. Now if this were GitHub, I'd fork the repo, clone the forked repo, create a sanely named branch, do the work, commit and push, and create a pull request using the GitHub web UI. But it's not GitHub, and both the Launchpad and Bazaar architectures seem quite different from their GitHub/Git counterparts. So could a kind soul save me from drowning in tons of documents and compile a straightforward step path, mainly for the second part? Possibly including the relevant CLI commands where they are needed? Edit: It seems that I should clarify whether I'm asking specifically about Ubuntu packages (whatever that means) or Launchpad packages. I don't really care much about the distinction between Ubuntu packages and non-Ubuntu packages. Any software could be in Ubuntu today and out of it tomorrow, or vice versa. The development is what matters, much more than the distribution. So I was assuming that not every single package distributed in Ubuntu is hosted on Launchpad, and that an "official" or "default" workflow for Launchpad exists (well, if all the devs could agree on using Bazaar, why couldn't most of them agree on a patching workflow?), so I'm asking about the Launchpad way, not the Ubuntu way. And I chose Ask Ubuntu because the intersection is vast, so I guess it's pretty on topic here.
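
    For what it's worth, a rough Bazaar/Launchpad equivalent of that GitHub flow looks something like the sketch below (project and branch names are placeholders; the merge proposal itself is usually created on the branch's Launchpad page):

        bzr branch lp:someproject fix-1234      # "fork + clone": grab trunk locally
        cd fix-1234
        # hack, then commit, linking the commit to the bug report
        bzr commit -m "Fix the frobnication bug" --fixes lp:1234
        # push to your personal branch namespace on Launchpad
        bzr push lp:~your-launchpad-id/someproject/fix-1234
        # then open the branch page on Launchpad and click "Propose for merging"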

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project will mainly be about rewriting a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world. Facts:
    - there will be 2-3 developers,
    - at least one developer uses Windows, the rest use Linux,
    - there is one remote Linux-based machine, which should handle the test and production instances,
    - after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on),
    - client: internal, frequent business logic changes, scrum, daily deployments.
    What I want to achieve is a good workflow at as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far is:
    - developers code locally,
    - there is a Vagrant instance on every development machine, managed by Puppet; it contains the same Linux, Jenkins and Tomcat versions as the production machine (a sketch of such a Vagrantfile follows below),
    - while coding, the developer deploys to the Vagrant machine,
    - after a local merge to the test branch, Jenkins on the Vagrant box handles the tests,
    - when everything is fine, the developer pushes commits and merges,
    - Jenkins on the remote machine pulls the commit from the test branch, runs the tests and so on; if everything looks green, Jenkins deploys to the test Tomcat instance.
    Deployment to production is manual (although it can be done with helper scripts) once the business logic has been tested by other divisions and everything looks fine to the client. Now, the real question: does the above make any sense? Things that I'm not sure about:
    - Remote machine: won't there be any problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat?
    - Using Vagrant to develop in a PHP environment is just wise. Isn't it overkill with Tomcat? I mean, is there a higher probability that Tomcat will act the same on every machine?
    - Is there any sense in having a local Jenkins on Vagrant?
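
    As a point of reference only, a minimal Vagrantfile for this kind of per-developer box might look like the sketch below (the box name and Puppet manifest paths are assumptions, not taken from the question):

        # Vagrantfile -- one disposable Linux box per developer, provisioned by Puppet
        Vagrant.configure("2") do |config|
          config.vm.box = "hashicorp/precise64"                         # any box matching production
          config.vm.network "forwarded_port", guest: 8080, host: 8080   # reach Tomcat from the host
          config.vm.provision "puppet" do |puppet|
            puppet.manifests_path = "puppet/manifests"
            puppet.manifest_file  = "site.pp"   # installs the same Tomcat/Jenkins as production
          end
        end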

    Read the article

  • Latest update to Ubuntu 13.10 broke Intel graphics drivers

    - by James Davies
    I'm running a copy of Ubuntu 13.10 on an i7-4771 with Intel HD 4600 graphics, using a Dell UltraSharp 1440p monitor via DisplayPort. Up until today this configuration worked perfectly, but the latest update appears to have broken my graphics configuration, and Xorg now refuses to go above 1280p resolution. Running xrandr, it appears the driver incorrectly thinks my monitor is plugged into the HDMI port, and it detects a maximum resolution of 1920x1200 instead of 2560x1440. (It's actually plugged in via DisplayPort.) Based on the apt history.log, the latest update was for the kernel. I'm presuming the issue is that the official Intel driver hasn't been updated to support this version? Is there any way to resolve this, or will I need to upgrade to 14.10 to get the latest driver from Intel?

        Start-Date: 2014-05-28 11:30:57
        Commandline: aptdaemon role='role-commit-packages' sender=':1.473'
        Install: linux-image-extra-3.11.0-22-generic:amd64 (3.11.0-22.38),
                 linux-image-3.11.0-22-generic:amd64 (3.11.0-22.38),
                 linux-headers-3.11.0-22:amd64 (3.11.0-22.38),
                 linux-headers-3.11.0-22-generic:amd64 (3.11.0-22.38)
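
    If the new kernel image really is the trigger, one low-risk way to confirm it (a sketch, not a guaranteed fix) is to boot the previously installed kernel from GRUB's "Advanced options" menu and, if the DisplayPort mode comes back, hold the kernel packages until a fixed update lands:

        # after booting the previous kernel, check the detected outputs and modes:
        xrandr | grep -w connected

        # if the old kernel behaves, keep apt from reinstalling the broken one:
        sudo apt-mark hold linux-image-generic linux-image-extra-3.11.0-22-generic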

    Read the article

  • VCS strategy with TeamCity and CI

    - by Luke Puplett
    I'm planning a strategy which seeks to allow automated deployment of a website codebase into QA and production on check-in. We're using the fabulous TeamCity. We want to control release to live production; i.e. not have every check-in on Trunk go live. So my plan is to use Trunk as QA. Committing to Trunk triggers deployment to QA. I will then have a Production branch which also triggers deployment on commit, to the live site. The idea is simply that Trunk represents the mainline codebase but it hasn't gone live yet. We can branch features and do daily pulls from Trunk into those feature branches as per normal and merge/re-integrate into Trunk when we're happy for it to go to QA. When the BAs give the nod, we then smash a bottle of champagne and merge Trunk to Production and out she goes. I've never seen it done like this. Other greenfield CI strategies involve hiding features and code from production via config - this codebase can't cope with that - or just having CI on QA and taking cuts and manually pushing to live. Does my plan sound alright?

    Read the article

  • Set UFW before.rules without restart of server

    - by enedene
    I use UFW on my Ubuntu server. Unfortunately UFW has no built-in rules for port forwarding to another machine. What you need to do is edit /etc/ufw/before.rules and put the routing rules there, for example:

        # nat Table rules
        *nat
        :POSTROUTING ACCEPT [0:0]

        # Forward traffic from eth0 through eth1.
        -A POSTROUTING -s 192.168.0.0/24 -o eth1 -j MASQUERADE
        -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.0.200:80
        -A PREROUTING -i eth1 -p udp --dport 10090 -j DNAT --to 192.168.0.202:22
        -A PREROUTING -i eth1 -p tcp --dport 10090 -j DNAT --to 192.168.0.202:22
        -A PREROUTING -i eth1 -p tcp --dport 443 -j DNAT --to 192.168.0.200:443
        -A PREROUTING -i eth1 -p udp --dport 443 -j DNAT --to 192.168.0.200:443
        -A PREROUTING -i eth1 -p tcp --dport 57626 -j DNAT --to 192.168.0.2:57626
        -A PREROUTING -i eth1 -p udp --dport 57626 -j DNAT --to 192.168.0.2:57626
        -A PREROUTING -i eth1 -p tcp --dport 3306 -j DNAT --to 192.168.0.200:3306
        -A PREROUTING -i eth1 -p udp --dport 3306 -j DNAT --to 192.168.0.200:3306
        COMMIT

    My problem is that I can't find a way to apply new forwarding rules without restarting the server, which I very much hate to do. So please help me: is there a way?
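
    For the record, UFW can normally re-read before.rules without a reboot; a minimal sketch using standard ufw commands (behaviour can vary slightly by version):

        # re-apply /etc/ufw/before.rules (and the rest of the ruleset) in place
        sudo ufw reload

        # older versions without "reload" can cycle the firewall instead
        sudo ufw disable && sudo ufw enable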

    Read the article

  • At the end of my rope

    - by hvgotcodes
    I am a contractor at a big company. Currently there are three developers on the project, myself included. The problem is that the other two developers don't really get it. By "it" I mean the following:
    - They don't understand the best practices for the technology we are using. After 6 months of me and others giving them examples, terrible anti-patterns are still being used.
    - They are copy-and-paste programmers who produce primarily spaghetti code.
    - They constantly break things, implementing changes but not doing a basic smoke test to see if all is good.
    - They refuse to, or only rarely, ask for code reviews.
    - They refuse to, or only rarely, do even basic things like formatting code.
    - There is no documentation on any classes (jsdocs).
    - They're afraid to delete code that doesn't do anything.
    - They leave commented-out code blocks everywhere, even though we have version control.
    I find myself getting more and more frustrated as I format others' code, fix bugs, discover broken functionality, and create abstractions to remove the spaghetti. I really don't know what to do. I try not to get too frustrated, but it's just a mess. I like these people as people, but I feel the coding situation is so bad that I could move faster if they simply browsed the web all day. Would it be out of line to ask our manager to review the others' svn commit access, so that commits can only be made after a review by someone who is knowledgeable in what we are doing? As a contractor, I'm not sure if that's the best move. Is there a subtle/not-so-subtle way of making it clear how many things I am fixing?

    Read the article

  • Trying to build/install patched gtk3-engines-oxygen to test bugfix, get shared changelog.Debian.gz is different from other instances of package

    - by andlabs
    I want to just quickly test the patch in this bug report to gtk3-engines-oxygen so it can go upstream. I could test it either temporarily or permanently; I would just like to do it. I currently have the package installed. So far, I've tried:

        $ mkdir /tmp/o    # keep everything self-contained
        $ cd /tmp/o
        $ apt-get source gtk3-engines-oxygen
        $ cd oxygen-gtk3-1.3.5/
        $ patch -p1 < /path/to/patchfile
        $ dpkg-source --commit    # to make debuild happy (name 'layout'; just save the default; this is a test)
        $ debuild -us -uc         # bypass signature checks
        $ sudo debi ../oxygen-gtk3_1.3.5-0ubuntu1_amd64.changes

    According to some people on #ubuntu-packaging, this is what I have to do. It's this last step that's the problem; I'm getting:

        (Reading database ... 503333 files and directories currently installed.)
        Preparing to unpack gtk3-engines-oxygen_1.3.5-0ubuntu1_amd64.deb ...
        Unpacking gtk3-engines-oxygen:amd64 (1.3.5-0ubuntu1) over (1.3.5-0ubuntu1) ...
        dpkg: error processing archive gtk3-engines-oxygen_1.3.5-0ubuntu1_amd64.deb (--install):
         trying to overwrite shared '/usr/share/doc/gtk3-engines-oxygen/changelog.Debian.gz',
         which is different from other instances of package gtk3-engines-oxygen:amd64
        Errors were encountered while processing:
         gtk3-engines-oxygen_1.3.5-0ubuntu1_amd64.deb
        debi: debpkg -i failed

    What's going on? How do I fix it? Or am I doing this completely wrong (and ergo so are they)? I'm using Kubuntu 14.04 amd64. Thanks.
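
    A hedged observation about that error: it usually appears when another architecture's copy of the same package (e.g. gtk3-engines-oxygen:i386 on a multiarch system) still carries the original changelog, so the rebuilt package's shared /usr/share/doc file no longer matches its sibling. Two possible workarounds, assuming the devscripts tools are available:

        # give the test build its own higher version so it supersedes both copies
        dch --local +test "Apply patch from the bug report for local testing"
        debuild -us -uc

        # or, if a foreign-architecture copy is installed, remove it first
        sudo dpkg -r gtk3-engines-oxygen:i386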

    Read the article

  • Reformatting and version control

    - by l0b0
    Code formatting matters. Even indentation matters. And consistency is more important than minor improvements. But projects usually don't have a clear, complete, verifiable and enforced style guide from day 1, and major improvements may arrive any day. Maybe you find that

        SELECT id, name, address
        FROM persons
        JOIN addresses ON persons.id = addresses.person_id;

    could be better written as / is better written than

        SELECT persons.id, persons.name, addresses.address
        FROM persons
        JOIN addresses ON persons.id = addresses.person_id;

    while working on adding more columns to the query. Maybe this is the most complex of all four queries in your code, or a trivial query among thousands. No matter how difficult the transition, you decide it's worth it. But how do you track code changes across major formatting changes? You could just give up and say "this is the point where we start again", or you could reformat all queries in the entire repository history. If you're using a distributed version control system like Git, you can revert to the first commit ever and reformat your way from there to the current state. But it's a lot of work, and everyone else would have to pause work (or be prepared for the mother of all merges) while it's going on. Is there a better way to change history which gives the best of all results:
    - the same style in all commits,
    - minimal merge work?
    To clarify, this is not about best practices when starting the project, but rather what should be done when a large refactoring has been deemed a Good Thing™ but you still want a traceable history. Never rewriting history is great if it's the only way to ensure that your versions always work the same, but what about the developer benefits of a clean rewrite? Especially if you have ways (tests, syntax definitions or an identical binary after compilation) to ensure that the rewritten version works exactly the same way as the original?
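
    In Git specifically, one mechanical route to "same style in all commits" is to rewrite every revision through a formatter; a sketch, where sql-format stands in for whatever reformatter you trust, and with the caveat that every commit hash changes (so all collaborators must re-clone):

        # run the formatter over the tree of every commit, on all branches
        git filter-branch --tree-filter \
            'find . -name "*.sql" -print0 | xargs -0 -r sql-format --in-place' \
            -- --all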

    Read the article

  • What's the canonical process for backing up a website?

    - by Walkerneo
    This is going to sound terrible, but bear with me. I currently have a cron job that does a mysql dump, a git add-all and commit, and a git push to Bitbucket. I set this up almost a year ago, when I didn't know much about git, backups, and general web development and administration. I haven't had the time to fix this and do it properly, but the repo has now grown quite big from accumulating large temporary files from my forum, so now I have to do something, and I want to do it properly this time around. What processes do semi-large websites and personal site admins use for backing up server content? Based on what I've learned since I set this up, what I'm currently thinking of doing is:
    - making changes on a development domain and committing the code frequently;
    - archiving the entire site after a successful deployment from the development domain;
    - having automatic daily database and user-content backups.
    I still like the idea of backing up sqldumps with git, though. I know git isn't a backup tool and that this is beyond its purpose, but the textual queries that are exported would be easily managed by git and would save a lot of space in archives.
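
    For comparison, a bare-bones nightly job along those lines might look like this sketch (paths, database name, and retention period are placeholders; it assumes mysqldump can read credentials from ~/.my.cnf):

        #!/bin/sh
        # nightly-backup.sh -- dump the database and archive user content with dated copies
        set -eu
        STAMP=$(date +%F)
        BACKUP_DIR=/var/backups/site
        mkdir -p "$BACKUP_DIR"

        # a plain-text dump compresses well and still diffs cleanly if kept in git
        mysqldump --single-transaction mydb | gzip > "$BACKUP_DIR/mydb-$STAMP.sql.gz"
        tar -czf "$BACKUP_DIR/content-$STAMP.tar.gz" /var/www/site/uploads

        # drop archives older than 30 days
        find "$BACKUP_DIR" -name '*.gz' -mtime +30 -delete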

    Read the article

  • Is version control with Git useful for an individual developer working on small projects? [closed]

    - by chefnelone
    I work as a website developer, developing with the Drupal CMS. I have a Drupal base installation with settings that serve as a sort of default for all my projects; this installation lives in my drupal-base folder. Every time I start a new project I just duplicate the drupal-base folder and start coding the new features I need for the new project. The problem is that sometimes I work on more than one project at the same time, and I build a new feature in one project that I'd like to commit back to my Drupal base installation and also to the other projects. Keeping all of this in sync is a nightmare. I thought that version control with Git could help me with this, and I went through a tutorial about it. But now I'm not sure if it will be useful for me. So my question is: I think that Git is only useful for big projects where a team is working together on the same files, and that it is not useful for small, individual projects. Am I right?
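
    Incidentally, the sync problem described above is exactly what a shared Git remote addresses, team or no team; a sketch of one way to wire it up (repository names are placeholders):

        # one-time setup: make the base installation a repository...
        cd drupal-base && git init && git add -A && git commit -m "Drupal base"

        # ...and start each new project as a clone of it
        git clone drupal-base project-a

        # when project-a grows a feature the base should have, merge it back
        cd drupal-base
        git remote add project-a ../project-a   # once
        git fetch project-a
        git merge project-a/master              # or cherry-pick just the feature commits

        # other projects then pull the updated base
        cd ../project-b && git pull origin master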

    Read the article

  • Are there any good examples of open source C# projects with a large number of refactorings?

    - by Arjen Kruithof
    I'm doing research into software evolution and C#/.NET, specifically on identifying refactorings from changesets, so I'm looking for a suitable (XP-like) project that may serve as a test subject for extracting refactorings from version control history. Which open source C# projects have undergone large (number of) refactorings? Criteria A suitable project has its change history publicly available, has compilable code at most commits and at least several refactorings applied in the past. It does not have to be well-known, and the code quality or number of bugs is irrelevant. Preferably the code is in a Git or SVN repository. The result of this research will be a tool that automatically creates informative, concise comments for a changeset. This should improve on the common development practice of just not leaving any comments at all. EDIT: As Peter argues, ideally all commit comments would be teleological (goal-oriented). Practically, if a comment is made at all it is often descriptive, merely a summary of the changes. Sadly we're a long way from automatically inferring developer intentions!

    Read the article

  • Ubuntu 12.10 won't display properly after kernel upgrade

    - by Daniel
    After updating the system today, Ubuntu doesn't display correctly. The desktop now looks like this. It was working properly before. I had to use the terminal to run Synaptic Package Manager so I could view the update history, which is as follows:

        Commit Log for Wed Nov 7 11:50:36 2012
        Upgraded the following packages:
          linux-image-generic (3.5.0.17.19) to 3.5.0.18.21
        Installed the following packages:
          linux-image-3.5.0-18-generic (3.5.0-18.29)
          linux-image-extra-3.5.0-18-generic (3.5.0-18.29)

    Prior to this issue, the last active driver was nvidia-current-updates, version 304.51. I tried using the nvidia-current driver, version 304.51.really.304.43, instead, but the problem persists. I tried running nvidia-settings from the terminal so I could try configuring something, but the application informs me that the NVIDIA driver is not being used. As the x-swat repository has nothing for Quantal, I desperately added the unstable x-edgers repository and upgraded, but to no avail, so I purged it. The display should normally be full HD, but the only available resolutions now are 1024x768 (4:3) and 800x600 (4:3). The system is a Dell XPS L702X, with NVIDIA GeForce GT 555M and a 17" screen. How can I fix this problem? Update: I tried the Nouveau third-party driver and this fixes the issue. However, if you have any idea how to get the NVIDIA drivers working properly with the latest kernel, please share, as I've noticed some videos playing very slowly on the system, though I'm not sure exactly why.

    Read the article

  • Authenticate with Django 1.5

    - by gorjuce
    I'm currently testing Django 1.5 and a custom User model, but I have some problems. I've created a User class in my account app, which looks like:

        class User(AbstractBaseUser):
            email = models.EmailField()
            activation_key = models.CharField(max_length=255)
            is_active = models.BooleanField(default=False)
            is_admin = models.BooleanField(default=False)

            USERNAME_FIELD = 'email'

    I can correctly register a user, who is stored in my account_user table. Now, how can I log in? I've tried with:

        def login(request):
            form = AuthenticationForm()
            if request.method == 'POST':
                form = AuthenticationForm(request.POST)
                email = request.POST['username']
                password = request.POST['password']
                user = authenticate(username=email, password=password)
                if user is not None:
                    if user.is_active:
                        login(user)
                    else:
                        message = 'disabled account, check validation email'
                        return render(request, 'account-login-failed.html', {'message': message})
            return render(request, 'account-login.html', {'form': form})

    My forms.py contains my register form:

        class RegisterForm(forms.ModelForm):
            """a form to create a user"""
            password = forms.CharField(label="Password", widget=forms.PasswordInput())
            password_confirm = forms.CharField(label="Password Repeat", widget=forms.PasswordInput())

            class Meta:
                model = User
                exclude = ('last_login', 'activation_key')

            def clean_password_confirm(self):
                password = self.cleaned_data.get("password")
                password_confirm = self.cleaned_data.get("password_confirm")
                if password and password_confirm and password != password_confirm:
                    raise forms.ValidationError("Passwords don't match")
                return password_confirm

            def clean_email(self):
                if User.objects.filter(email__iexact=self.cleaned_data.get("email")):
                    raise forms.ValidationError("email already exists")
                return self.cleaned_data['email']

            def save(self):
                user = super(RegisterForm, self).save(commit=False)
                user.password = self.cleaned_data['password']
                user.activation_key = generate_sha1(user.email)
                user.save()
                return user

    My question is: why does authenticate() give me None? I know I'm trying to authenticate() with an email as the username, but isn't that one of the reasons to use a custom User model?
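
    One likely culprit, offered as a hedged diagnosis rather than a confirmed answer: authenticate() hashes the submitted password and compares it with the stored hash, but the save() above assigns the raw password, so the comparison always fails. The usual pattern uses set_password():

        def save(self, commit=True):
            user = super(RegisterForm, self).save(commit=False)
            # set_password() stores a salted hash; assigning user.password
            # directly stores plain text, which authenticate() will never match.
            user.set_password(self.cleaned_data['password'])
            user.activation_key = generate_sha1(user.email)
            if commit:
                user.save()
            return user

    Note also that the view itself is named login, so the inner login(user) call recurses into the view instead of calling django.contrib.auth.login(request, user), which takes the request as its first argument.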

    Read the article

  • Information I need to know as a Java Developer [on hold]

    - by Woy
    I'm a Java developer trying to gain more knowledge and become a better programmer. I've listed a number of technologies to learn. Beyond what I've listed, what technologies would you suggest a junior Java developer learn? I realize there's a lot to study.
    Java:
    - how a garbage collector works
    - resource management
    - network programming; TCP/IP, HTTP
    - transactions, consistency
    - interfaces, classes, collections, hash codes, algorithms, computational complexity
    - concurrent programming: synchronization, semaphores, stream management
    - mutability, thread-safety
    - byte code manipulation, reflection, Aspect-Oriented Programming as a base for understanding frameworks such as Spring, etc.
    Web stack:
    - servlets, filters, socket programming
    Libraries:
    - JDK, GWT, Apache Commons, Joda-Time
    - dependency injection: Spring, Nano
    Tools:
    - IDE: very good knowledge, including the debugger and profiler
    - web analyzers: Wireshark, Firebug
    - unit testing
    SQL/Databases:
    - basics: SELECTing columns from a table
    - aggregates part 1: COUNT, SUM, MAX/MIN
    - aggregates part 2: DISTINCT, GROUP BY, HAVING
    - intermediate: JOINs, ANSI-89 and ANSI-92 syntax; UNION vs UNION ALL
    - NULL handling: COALESCE and native NULL handling
    - subqueries: IN, EXISTS, and inline views; correlated subqueries
    - WITH syntax: subquery factoring/CTEs
    - views
    - advanced topics: functions, stored procedures, packages
    - pivoting data: CASE and PIVOT syntax
    - hierarchical queries
    - cursors: implicit and explicit
    - triggers
    - dynamic SQL
    - materialized views
    - query optimization: indexes, explain plans, profiling
    - data modelling: normal forms 1 through 3; primary and foreign keys; table constraints; link/corollary tables
    - full-text searching
    - XML
    - isolation levels
    - entity relationship diagrams (ERDs), logical and physical
    - transactions: COMMIT, ROLLBACK, error handling

    Read the article

  • Do you do custom tests/work for a potential employer before the interview?

    - by Chuck Stephanski
    Every shop I've worked at has followed what I thought was a standard hiring sequence:
    1. solicit resumes
    2. phone screen the applicants we are interested in
    3. in-person interview
    4. in some shops, a second in-person interview (just with the CEO, for example)
    Most places I apply to follow something along these lines. But some shops want to give me a test or ask me to build something after I send in my resume. They usually say, "congratulations, you've made it to stage 2 of our hiring process!" But for all I know they are asking every applicant whose resume they receive to do the same thing. This annoys me because it doesn't scale. If I'm applying for a lot of positions, I can't spend all my time doing work that can potentially only land me one job. Plus there's no shared dedication to the process: if we do a 45-minute phone screen, that's 45 minutes the company commits to me and 45 minutes I commit to the company, in hopes of a potential match. Do you have a policy regarding these sorts of things?

    Read the article

  • Writing generic code when your target is a C compiler

    - by enobayram
    I need to write some algorithms for a PIC microcontroller. AFAIK, the official tools support either assembler or a subset of C. My goal is to write the algorithms in a generic and reusable way without losing any runtime or memory performance. And if possible, I would like to do this without increasing development time much, or compromising readability and maintainability much, either. What I mean by generic and reusable is that I don't want to commit to types, array sizes, the number of bits in a bit field, etc. All these specifications, IMHO, point to C++ templates, but there's no compiler for them for my target. C macro metaprogramming is another option, but, again in my opinion, that greatly reduces readability and increases development time. I believe what I'm looking for is a decent C++-to-C translator, but I'd like to hear anything else that satisfies the above requirements. Maybe a translator from another high-level language to C that produces very efficient code, maybe something else. Please note that I have nothing against C; I just wish templates were available in it.
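
    For context, the macro metaprogramming dismissed above usually looks something like this sketch: a template-like ring buffer expanded per element type, with everything fixed at compile time (all names here are illustrative):

        /* Expands to a ring-buffer type and push function specialized for T,
           with the element type and capacity chosen at instantiation. */
        #define DEFINE_RING(T, N)                                                \
            typedef struct { T buf[N]; unsigned char head, len; } ring_##T;     \
            static void ring_##T##_push(ring_##T *r, T v) {                     \
                r->buf[(r->head + r->len) % (N)] = v;                           \
                if (r->len < (N)) r->len++;              /* still filling */    \
                else r->head = (r->head + 1) % (N);      /* overwrite oldest */ \
            }

        DEFINE_RING(int, 8)   /* instantiates ring_int and ring_int_push */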

    Read the article

  • Using Queries with Coherence Read-Through Caches

    - by jpurdy
    Applications that rely on partial caches of databases, and use read-through to maintain those caches, face some trade-offs if queries are required. Coherence does not support push-down queries, so queries will apply only to data that currently exists in the cache. This is technically consistent with "read committed" semantics, but the potential absence of data may make the results so unintuitive as to be useless for most use cases (depending on how much of the database is held in cache). Alternatively, the application itself may manually "push down" queries to the database, either retrieving results equivalent to querying the cache directly, or querying the database for a key set and reading the values from the cache (relying on read-through to handle any missing values). Obviously, if the result set is too large, reading through the cache may cause significant thrashing. It's also worth pointing out that if the cache is asynchronously synchronized with the database (perhaps via a database change listener), an application may commit a transaction to the database, then generate a key set from the database via a query, then read the cache entries through the cache, possibly resulting in a race condition where the application sees older data than it had previously committed. In theory this is not problematic, but in practice it is very unintuitive. For this reason it often makes sense to invalidate the cache when updating the database, forcing the next read-through to update the cache.
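
    A minimal sketch of the manual "query the database for a key set, then read the values through the cache" pattern described above (the cache name and the SQL in the comment are placeholders, not from the original post):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import java.util.Collection;
        import java.util.Map;

        public final class KeySetReadThrough {
            // 'ids' comes from a manual push-down query against the database,
            // e.g. SELECT id FROM orders WHERE status = 'OPEN'.
            static Map loadThroughCache(Collection ids) {
                NamedCache cache = CacheFactory.getCache("orders");
                // getAll() reads every key from the cache; the configured
                // read-through CacheStore fills in any missing entries.
                return cache.getAll(ids);
            }
        }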

    Read the article

  • Release Notes for 3/2/2012

    Here are the notes for today’s release:
    - Added a progress indicator when saving issues.
    - Added support for viewing CodePlex RSS feeds in Chrome.
    - Deployed several bug fixes:
      - Fixed an issue where the back button on Internet Explorer was not working as intended when browsing code.
      - Fixed an issue where long commit comments would push the source control info box outside the boundaries of the page.
      - Fixed an issue where Internet Explorer users were not able to widen the frame of the source code browser until a file was selected.
      - Fixed an issue where opening a source code file directly from a URL in Internet Explorer would cause the source code tree to be collapsed.
      - Fixed an issue where adding a code snippet with long lines of text to a discussion thread using Internet Explorer would needlessly display a vertical scrollbar, limiting the amount of code visible.
      - Fixed an issue where tabbing through some links would render them invisible.
    - Deprecated support for embedding PreEmptive analytics statistics on the project statistics page. If you’re interested in collecting and reporting your own statistics, PreEmptive’s RunTime Intelligence Endpoint Starter Kit offers a good starting point for capturing data.
    Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article

  • Why does Ubuntu 12.10 Beta 2 insist on committing changes to the partition table?

    - by Uten
    Why does Ubuntu 12.10 Beta 2 insist on committing changes to the partition table even when no real changes have been made? This is a show-stopper for me, as I'm installing without a CD/DVD-ROM. This is how I go about it: I downloaded the ISO image and extracted vmlinuz and initrd.lz to the same folder where I keep the ISO image, then configured GRUB (0.9x) to boot /ubuntu/vmlinuz with the ISO image like this:

        title ubuntu live-cd
        kernel /ubuntu/vmlinuz boot=casper iso-scan/filename=/ubuntu/ubuntu-12.10-beta2-desktop-i386.iso ro quiet splash
        initrd /ubuntu/initrd.lz
        boot

    This works well and I get a running live CD session. The ISO image is mounted on /isomedia (or something similar). The spare HD space where I want to install Ubuntu is in the logical area (at the very end of the disk). I have tried both leaving the space empty and preformatting it with ext4. After selecting the partition, selecting "use as ext4" and selecting a mount point (/), I get the message: "The installer needs to commit changes to partition tables, but cannot do so because partitions on the following mount points could not be unmounted: /isomedia" (or something similar). Is this a "feature" of the installer, insisting that everything be unmounted even if no changes are necessary (as far as I understand)? It's probably a safety feature, but is it needed? I have changed layouts with parted and gparted (at the end of the disk) for years without any failures. I understand that booting the ISO image like this is not the common way, but it is just such a beautiful way of doing it when you have a running system and want to play with another. Has anyone had any success installing Ubuntu (12.10 Beta 2) like this? Best regards, Uten

    Read the article

  • What is the fastest way for me to become a full stack developer? [on hold]

    - by user136368
    I run a small web design firm. I have an overview of HTML, CSS, JS, PHP (Laravel), and MySQL, and I did a few courses on Codecademy. I wanted to build a web app in the company, but I find that I am severely crippled by my lack of programming expertise. I want to become a full stack developer who can build a prototype on his own. I cannot spend 5-10k USD on the boot camp courses. Can someone suggest structured courses which can help me become a full stack developer? I have found a few websites, but I do not know if they can get me there. My goal: be able to make a working prototype of the ideas I come up with. (This is my primary goal. I do not want to be the lead developer; I just want to be able to make a prototype.) Several questions I have in mind:
    - Will it be fine if I stick to PHP (Laravel)? Should I be using RoR?
    - I have come across a few online resources: Codecademy, Code School, Treehouse, and The Odin Project. These are within my affordability range.
    - I can commit to 2-3 months of intensive study to learn programming. What do you suggest I do?

    Read the article
