Search Results

Search found 4369 results on 175 pages for 'merge tracking'.


  • New JavaScript Editor

    - by Petr
    I haven't written a blog post here for a few weeks; I think my last post was about the release of NetBeans 7.1 at the beginning of January. The reason is not that I changed jobs :), but that I have been concentrating on the new JavaScript support/editor. The new JavaScript editor is written essentially from scratch. The answer to the question "Why start from the beginning again, why not just improve the old one?" is not easy, and the decision had several aspects. One of the main reasons is that the old support was written four years ago and its architecture is limited. The APIs also changed over time, and it was very hard to keep the editor up to date. There is a license issue as well. In short, it was time to rewrite the old JS editor.

    We have built up a strong community around the PHP support in NetBeans, and because many PHP developers also write JavaScript code, I would like to ask you for help. There is a continual PHP build with the new JavaScript support; you can download the result of the builds here. It's a zip file that you can unzip anywhere you want. I recommend running the build with a new userdir, to avoid damaging your current userdir. That shouldn't happen, but just to be sure :). You can achieve this through the --userdir switch. Starting the unzipped build from the command line, from the folder where you unzipped it, can be done with this command on Unix:

        bin/netbeans.sh --userdir /path/to/new/userdir

    and on Windows:

        bin\netbeans.exe --userdir D:\path\to\new\userdir

    Developers who already use the continual PHP build will know this well. There is also a full IDE build with the new JavaScript support for people who need more than only PHP support. Because the builds with the new JavaScript editor are created from a branch, there are no nightly builds available. There will be once we merge the branch into the trunk, but so far we have to work only with the mentioned continual build. We will merge our branch after NetBeans 7.2 is branched from the trunk. This also answers the question of which release of NetBeans will contain the new JS support: it should be the release after NetBeans 7.2.

    I'm asking whether you could play with the builds or, better, work in the builds with the new JavaScript support, and tell us about every issue that you run into. It can be anything that doesn't suit you: something doesn't work as you expected, something is slow, you want to change the behaviour of a feature, etc. Your input and comments are very important to us and will help us achieve the new JavaScript support that you need. The best way to communicate issues is through our Bugzilla, because it is simple to track them. Sure, you can write a comment here :), but I still prefer Bugzilla for any issue. You can click here (you should already be logged in to Bugzilla) and a form for a new JavaScript issue opens, with the Editor component and the NO72 keyword pre-filled.

    I will write about the individual features later, but for now I will mention a few features that should work better than in the old support: syntactic and semantic colouring, the Navigator, Mark Occurrences and Go To Declaration, and Code Completion. Code Completion is invoked through the keyboard shortcut Ctrl+Space. The first invocation offers items that are found through a source model; almost all editor features are based on this model, which is built from the source code. There is still a lot of work to do on the model, but it should already offer better results. When the popup window with code completion items is open and you press Ctrl+Space again, the code completion offers all elements that are in the project (in the pictures, all elements that start with the letter 't'). There is also a formatter with many options, and more :). A few features that the old JavaScript support had are not implemented yet (for example jQuery support), but we are adding these features as soon as possible.

    Read the article

  • Some Original Expressions

    - by Phil Factor
    Guest editorial for the Simple-Talk newsletter. In a guest editorial for the Simple-Talk Newsletter, Phil Factor wonders if we are still likely to find some more novel and unexpected ways of using the newer features of Transact-SQL, or maybe some features that have always been there! There can be a great deal of fun to be had in trying out recent features of SQL expressions to see if they provide new functionality. It is surprisingly rare to find things that couldn't be done before, albeit in a different and more cumbersome way, but it is great to experiment or to read of someone else making that discovery. One such recent feature is the 'table value constructor', or 'VALUES constructor', that made it into SQL Server 2008 from standard SQL. This allows you to create derived tables of up to 1000 rows neatly within SELECT statements that consist of lists of row values. E.g.:

        SELECT Old_Welsh, number
        FROM (VALUES ('Un',1),('Dou',2),('Tri',3),('Petuar',4),('Pimp',5),
                     ('Chwech',6),('Seith',7),('Wyth',8),('Nau',9),('Dec',10))
             AS WelshWordsToTen (Old_Welsh, number)

    These values can be expressions that return single values, including, surprisingly, subqueries. You can use this device to create views, or in the USING clause of a MERGE statement. Joe Celko covered this here and here. It becomes extraordinarily handy once one gets into the way of thinking in these terms, and I've rewritten a lot of routines to use the constructor; the old way of using UNION can be used in the same manner, but is a little slower and more long-winded. The use of scalar SQL subqueries as an expression in a VALUES constructor, then applied to a MERGE, has got me thinking. It looks very clever, but what use could one put it to? I haven't seen anything yet that couldn't be done almost as simply in SQL Server 2000, but I'm hopeful that someone will come up with a way of solving a tricky problem, just as a freak of the XML syntax forever made the in-line production of delimited lists from an expression easy, or a weird XML pirouette could do an elegant pivot-table rotation. It is in this sort of experimentation that the community of users can make a real contribution. The dissemination of techniques such as the Number, or Tally, table, or the unconventional ways that the UPDATE statement can be used, has been rapid due to articles and blogs. However, there is plenty to be done to explore some of the less obvious features of Transact-SQL. Even some of the features introduced into SQL Server 2000 are hardly well known. Certain operations on data are still awkward to perform in Transact-SQL, but we mustn't, I think, be too ready to state that certain things can only be done in the application layer, or using a CLR routine. With the vast array of features in the product, and the tools that surround it, I feel that there is generally a way of getting tricky things done. Or should we just stick to our lasts and push anything difficult out into procedural code? I'd love to know your views.

    Read the article

  • Cutting-Edge Demos Coming to Collaborate12

    - by mvaughan
    By Kathy Miedema, Oracle Applications User Experience

    Are you building your Collaborate 2012 agenda? Leave room for a stop at the demo grounds while you're in Las Vegas from April 22-26. In addition to several presentations on the Oracle user experience, the Applications User Experience (UX) team will be on the demo grounds with a new eye-tracking tool, as well as demos that showcase new user experience designs. Check out our cutting-edge technology, which we use to obtain feedback that helps improve the user experience of Oracle applications, and see what our next-generation designs are in the HCM and FIN user experiences.

    Photo by Martin Taylor, Oracle Applications User Experience: an Apps UX team member demonstrates what happens during an eye-tracking test. The dots on the screen show where test participants were looking and how long they spent at each point on the page.

    The UX team will also be staffing an on-site lab at Collaborate. At on-site labs, conference participants can sign up to join customer feedback sessions on several different kinds of workflow designs, from HCM to FIN to CRM to mobile. The feedback UX team members collect helps inform and fine-tune the user experiences being designed for next-generation applications. At Collaborate12, for example, user experience designs around Help and organizational charts will be tested for usability. The Apps UX team brings on-site labs to many major user group conferences, including OpenWorld 2012 in October in San Francisco. Stay tuned to find out when our recruiters are ready to sign up participants, or leave a comment below to find out whether an on-site lab will be at your next conference.

    For information on the following presentations, which will be delivered by Apps UX team members, check the Usable Apps Events page.
    • The Fusion Applications User Experience: Transforming Work into Insight
    • Customizations Under the Covers – Making Fusion Applications Your Own
    • OAUG Fusion Middleware SIG (FMWSIG)
    • 18 Months with Fusion Applications – Stories From The Trenches
    • PeopleTools Tips and Techniques

    Read the article

  • How to undo a changeset using tf.exe rollback

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010, Team Foundation Utilities, TFS2010

    Oh no! Did you just check a changeset in to TFS and realize that you need to roll it back because the changes were supposed to go into a different branch? Or did you just accidentally merge a wrong changeset into your release branch? There are several ways to undo the damage.

    Manual: Yes, we all just hate this word, but for the record you could manually roll back the changes. Get Specific Version on the branch and choose the changeset prior to the one you checked in. After that, check out all the files in the changeset and check them in. During the check-in you will receive a conflict. At this point, choose 'Keep local changes' in the conflict resolution window and check in the files.

    Automated: Yes, we just love it! TFS comes with a very powerful command-line utility, tf.exe, that gives you the ability to roll back the effects of one or more changesets to one or more version-controlled items. This command does not remove the changesets from an item's version history. Instead, it creates in your workspace a set of pending changes that negate the effects of the changesets that you specify.

    Syntax:

        tf rollback /toversion:VersionSpec ItemSpec [/recursive] [/lock:none|checkin|checkout] [/version:versionspec] [/keepmergehistory] [/login:username,[password]] [/noprompt]

        tf rollback /changeset:ChangesetFrom~ChangesetTo [ItemSpec] [/recursive] [/lock:none|checkin|checkout] [/version:VersionSpec] [/keepmergehistory] [/noprompt] [/login:username,[password]]

    I'll explain this with an example. Your workspace is at the location C:\myWorkspace, and you want to roll back changeset #145621:

        C:\Workspace\MyBranch>tf.exe rollback /changeset:145621 /recursive

    How do I roll back or undo a series of changesets? You can roll back a range of changesets like this:

        C:\Workspace\MyBranch>tf.exe rollback /changeset:145601~145621 /recursive

    This will check out the files in version control and you should be able to see them in the pending changes. Go on and check them in to undo the specific changesets that you just rolled back.

    Do you want to completely exclude the changeset from all future merges between the two branches? Use /keepmergehistory. This option has an effect only if one or more of the changesets that you are rolling back include a branch or merge change. Specify it if you want future merges between the same source and the same target to exclude the changes that you are rolling back.

    Errors: If you get the message "Unable to determine the workspace. You may be able to correct this by running 'tf workspaces /collection:TeamProjectCollectionUrl'", you are in the wrong directory. Make sure that you run the 'tf rollback' command from the directory of your workspace.

    Exit codes:
    • 0 – The operation rolled back all items successfully.
    • 1 – The operation rolled back at least one item successfully but could not roll back one or more items.
    • 100 – The operation could not roll back any items.

    To use the command you must have the Read, Check Out, and Check In permissions set to Allow. So, have you been in a rollback situation before?

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn’t an academic paper. Journalists wouldn’t base a news article on facts that they can’t verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality, in other words, without knowing its lineage. (sometimes referred to as ‘provenance’ or ‘pedigree’) The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey to our report, from its source, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven’t been left out, or that the sources are good? Even when we’re using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which it came. You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you’re an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you’re a developer, working on a pipeline, it provides the context you need to track down the bug. If you’re a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance. Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the eco-system. To get a full picture of what’s going on in your Hadoop system you need to capture both Falcon lineage and the data-exhaust of other tools that Falcon can’t orchestrate. However, the problem is bigger even that that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where ‘after’ could be  a data analysis environment like R, an application, or even directly into an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power, and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with any tool they require to use. 
The real solution, therefore, is to create a consistent method by which to bring lineage data from these data various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs, keep your audit data, from every source, bring them together and use the data analysis tools to trace the paths from raw data to the answer that data analysis provides.

    Read the article

  • How Does a 724% Return on Your Salesforce Automation Investment Sound?

    - by Brian Dayton
    Oracle Sales Cloud and Marketing Cloud customer Apex IT gained just that: a 724% return on investment (ROI) when they implemented these Oracle Cloud solutions in their fast-moving, rapidly growing business.

    Congratulations, Apex IT! Apex IT was just announced as a winner of the Nucleus Research 11th annual Technology ROI Awards. The award, given by the analyst firm, highlights organizations that have successfully leveraged IT deployments to maximize value per dollar spent.

    Fast facts:
    • Return on investment: 724%
    • Payback: 2 months
    • Average annual benefit: $91,534
    • Cost:benefit ratio: 1:48

    Business benefits: In addition to the ROI and cost metrics, the award calls out improvements in Apex IT's business operations, across both Sales and Marketing teams:
    • Improved ability to identify new opportunities and focus sales resources on higher-probability deals
    • Reduced administration and manual lead tracking, resulting in more time selling and a net new client increase of 46%
    • Increased campaign productivity for both Marketing and Sales, including Oracle Marketing Cloud's automation of campaign tracking and nurture programs
    • Improved margins with more structured and disciplined sales processes, resulting in more effective deal negotiations

    Please join us in congratulating Apex IT on this award and their business achievements.

    Want more details? Don't take our word for it. Read the full Apex IT ROI Case Study and learn more about Apex IT's business, including their work with Oracle Sales and Marketing Cloud on behalf of their clients in leading Sales organizations.

    Learn more about Oracle Sales Cloud: www.oracle.com/salescloud, www.facebook.com/oraclesalescloud, www.youtube.com/oraclesalescloud

    Oracle Customer Experience and complementary sales solutions: Oracle Configure, Price and Quote (CPQ) Cloud, Oracle Marketing Cloud, Oracle Customer Experience

    Read the article

  • Procedural level generation for a platformer game (tile-based) using player physics

    - by Notbad
    I have been searching for information about how to build a 2D world generator (tile-based) for a platformer game I am developing. The levels should look like dungeons with a ceiling and a floor, and they will have a high probability of consisting of just horizontal rooms, but sometimes they can have exits to a room above or below. Here is an example of what I would like to achieve; I'm referring only to the caves part. I know the level design won't be that great when generated, but I think it is possible to produce something good enough for people to enjoy the procedural maps (note: Super Metroid spoiler!): http://www.snesmaps.com/maps/SuperMetroid/SuperMetroidMapNorfair.html

    After spending some time thinking about this, I have some ideas for creating the maps that I would like to share with you:

    1) I have read about cellular automata and I would like to use them to carve the rooms, but instead of carving just one tile at a time I would like to carve full columns of tiles. Of course, this carving system will have some restrictions, like how many tiles must be left for the roof and the floor, etc. This way I could get much cleaner rooms than with the usual automaton. (A sketch of what I mean follows below.)

    2) I want some branching in the rooms. It will have a low probability of happening, but I definitely want it. Thinking about carving, I came to the conclusion that I could use some sort of path-creation algorithm that the carving system would follow to create a path through the rooms. This could be more noticeable if we make the carving system carve columns with the height of a corridor or the height of a wide room (this will be added to the system as a parameter). This way, at some point I could spawn a new automaton beside the main one to create branches. This new automaton should work side by side with the first one to create dead ends and islands (both paths created by the automata meet at some point or lead to the same room).

    It would take too long to explain all the tests I have done, so I will just try to summarize the problems, to see if anyone can bring some light to solve them (I don't mind sharing my successes, but I think they aren't too relevant):

    1) Zone reachability: how can I make sure that the player will be able to reach all the zones I created (mainly when branches happen or vertical rooms are created)? When branches are created, I have to make sure that there will be a way to get onto the newly created branch; I mean a bifurcation that the player can follow (the player follows the main path or jumps to a platform to get onto the other one). On the other hand, if an island is created by the meeting of both branches, I need to make sure the player will be able to get onto the island too.

    2) When a branch is created and corridors are generated for each branch, how can I make them merge or repel, to create an island or just two separate corridors?

    3) When I create a branch and an island is created because both corridors merge at some point or lead to the same room, is there any way to detect this and randomize where to create the platforms needed to get onto the island? These platforms could be created at the start of the island or at the end.

    I guess part of the problem could be solved using some sort of graph following the created paths, but I'm a bit lost in this sea of procedural content creation :). I don't expect a full solution to the problem, just some information to get me moving forward again. Thanks in advance.
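    To make point 1 concrete, here is a minimal sketch of what I mean by column carving (plain JavaScript, every name made up by me): a walker sweeps left to right and opens a full column of tiles at each step, drifting the floor height at random while always keeping a minimum number of solid rows above and below.

        function carveCorridor(width, height, minRoof, minFloor, corridorHeight) {
          var map = [], x, y;
          for (y = 0; y < height; y++) {
            map[y] = [];
            for (x = 0; x < width; x++) map[y][x] = 1; // 1 = solid, 0 = open
          }
          var floor = Math.floor(height / 2); // row index of the corridor floor
          for (x = 0; x < width; x++) {
            // Drift the floor by -1, 0 or +1, clamped so that at least minRoof
            // solid rows remain above the corridor and minFloor rows below it.
            floor += Math.floor(Math.random() * 3) - 1;
            floor = Math.max(minRoof + corridorHeight - 1,
                             Math.min(height - minFloor - 1, floor));
            // Carve a full column of open tiles, corridorHeight tall.
            for (y = floor - corridorHeight + 1; y <= floor; y++) map[y][x] = 0;
          }
          return map;
        }

        // Example: a 60x20 map, 2 solid border rows, a 4-tile-high corridor.
        var level = carveCorridor(60, 20, 2, 2, 4);
        console.log(level.map(function (row) {
          return row.map(function (t) { return t ? '#' : '.'; }).join('');
        }).join('\n'));

    Because every carved column is a contiguous vertical slice, checking reachability between neighbouring columns (my problem 1) reduces to checking that their open ranges overlap, or sit within jump height of each other; a second walker spawned from a random column with its own floor value would give the branching of point 2.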

    Read the article

  • College Ratings via the Federal Government

    - by user9147039
    A few weeks back you might remember news about a higher education rating system proposal from the Obama administration. As I've discussed previously, political and stakeholder pressures to improve outcomes and increase transparency are stronger than ever before. The executive branch proposal is intended to make progress in this area. Quoting from the proposal itself: "The ratings will be based upon such measures as: Access, such as percentage of students receiving Pell grants; Affordability, such as average tuition, scholarships, and loan debt; and Outcomes, such as graduation and transfer rates, graduate earnings, and advanced degrees of college graduates."

    This is going to be quite complex, to say the least. Most notably, higher ed is not monolithic. From community and other 2-year colleges, to small private 4-year colleges, to professional schools, to large public research institutions, the many walks of higher ed life are, well, many. Designing a ratings system that doesn't wind up with lots of unintended consequences and collateral damage will be difficult. At best you would end up potentially tarnishing the reputation of certain institutions that were actually performing well against the metrics and outcome measures that make sense in their "context" of education. At worst you could spend a lot of time and resources designing a system that would lose credibility with its "customers".

    A lot of institutions I work with already have systems like the one described above in place. They are tracking completion rates, completion timeframes, transfers to other institutions, job placement, and salary information. As I talk to these institutions, there are several constants worth noting:

    • Deciding on which metrics to measure is complicated. While employment and salary data are relatively easy to track, qualitative measures are more difficult. How do you quantify the benefit to someone who studies in a field that may not compensate him or her as well as another, but that provides huge personal fulfillment and reward?

    • The data is available, but the systems to transform the data into actual information that can be used in meaningful ways are not. Too often in higher ed, information is siloed. As such, much of the data that needs to be part of a comprehensive system sits in multiple organizations, oftentimes outside the reach of core IT.

    • Politics and culture are big barriers. One of the areas that my team and I spend a lot of time talking about with higher ed institutions all over the world is the imperative to optimize for student success. This, like the tracking of students' achievement after graduation, requires a level of organizational capacity that does not currently exist. The primary barrier is the culture of "data islands" in higher ed, and the need for leadership to drive out the divisions between departments, schools, colleges, etc. and institute academy-wide analytics and data stewardship initiatives that will enable student success.

    • Data quality is a very big issue. So many disparate systems exist (some on premise, some "in the cloud") that keep data about "persons" using different means to identify them. Establishing a single source of truth about an individual and his or her data is difficult without some type of data quality policy and tools. Good tools actually exist but are seldom leveraged.

    Don't misunderstand: I think it's a great idea to drive additional transparency and accountability into the system of higher education, and not just at home, but globally. Students and parents need access to key data to make informed, responsible choices. The tools exist not only to enable this kind of information to be shared, but to capture the very metrics stakeholders care most about, in a way that makes sense in the context of a given institution's "place" in the overall higher ed panoply.

    Read the article

  • Mercurial fails while committing/updating/etc. using Mercurial+TrueCrypt+Mac

    - by lukewar
    While trying to work with Mercurial on a project located on a TrueCrypt partition, I always get an error as follows:

        ** unknown exception encountered, details follow
        ** report bug details to http://mercurial.selenic.com/bts/
        ** or [email protected]
        ** Mercurial Distributed SCM (version 1.5.2+20100502)
        ** Extensions loaded:
        Traceback (most recent call last):
          File "/usr/local/bin/hg", line 27, in <module>
            mercurial.dispatch.run()
          File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 16, in run
            sys.exit(dispatch(sys.argv[1:]))
          File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 30, in dispatch
            return _runcatch(u, args)
          File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 50, in _runcatch
            return _dispatch(ui, args)
          File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 470, in _dispatch
            return runcommand(lui, repo, cmd, fullargs, ui, options, d)
          File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 340, in runcommand
            ret = _runcommand(ui, options, cmd, d)
          File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 521, in _runcommand
            return checkargs()
          File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 475, in checkargs
            return cmdfunc()
          File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 469, in <lambda>
            d = lambda: util.checksignature(func)(ui, *args, **cmdoptions)
          File "/Library/Python/2.6/site-packages/mercurial/util.py", line 401, in check
            return func(*args, **kwargs)
          File "/Library/Python/2.6/site-packages/mercurial/commands.py", line 3332, in update
            return hg.update(repo, rev)
          File "/Library/Python/2.6/site-packages/mercurial/hg.py", line 362, in update
            stats = _merge.update(repo, node, False, False, None)
          File "/Library/Python/2.6/site-packages/mercurial/merge.py", line 495, in update
            _checkunknown(wc, p2)
          File "/Library/Python/2.6/site-packages/mercurial/merge.py", line 77, in _checkunknown
            for f in wctx.unknown():
          File "/Library/Python/2.6/site-packages/mercurial/context.py", line 660, in unknown
            return self._status[4]
          File "/Library/Python/2.6/site-packages/mercurial/util.py", line 156, in get
            result = self.func(obj)
          File "/Library/Python/2.6/site-packages/mercurial/context.py", line 622, in _status
            return self._repo.status(unknown=True)
          File "/Library/Python/2.6/site-packages/mercurial/localrepo.py", line 1023, in status
            if (f not in ctx1 or ctx2.flags(f) != ctx1.flags(f)
          File "/Library/Python/2.6/site-packages/mercurial/context.py", line 694, in flags
            flag = findflag(self._parents[0])
          File "/Library/Python/2.6/site-packages/mercurial/context.py", line 690, in findflag
            return ff(path)
          File "/Library/Python/2.6/site-packages/mercurial/dirstate.py", line 145, in f
            if 'x' in fallback(x):
        TypeError: argument of type 'NoneType' is not iterable

    It is worth mentioning that Mercurial works perfectly if the project is not located on a TrueCrypt partition.

    Configuration:
    • Mac OS X 10.6.3
    • Mercurial Distributed SCM (version 1.5.2+20100502)
    • Python 2.6.5

    Are any of you generous people able to help me? :)

    Read the article

  • Help with SqlCeChangeTracking

    - by MusiGenesis
    I'm trying to use a new class in SqlCe 3.5 SP2 called SqlCeChangeTracking. This class (allegedly) lets you turn on change tracking on a table, without using RDA replication or Sync Services. Assuming you have an open SqlCeConnection, you enable change tracking on a table like this:

        SqlCeChangeTracking tracker = new SqlCeChangeTracking(conn);
        tracker.EnableTracking(TableName, TrackingKeyType.PrimaryKey, TrackingOptions.All);

    This appears to work, sort of. When I open the SDF file and view it in SQL Server Management Studio, the table has three additional fields: __sysChangeTxBsn, __sysInsertTxBsn and __sysTrackingContext. According to the sparse documentation, these columns (along with the __sysOCSDeletedRows system table) are used to track changes. The problem is that these three columns always contain NULL values for all rows, no matter what I do. I can add, delete, edit, etc., and those columns remain NULL no matter what (and no deleted records ever show up in __sysOCSDeletedRows). I have found virtually no documentation on this class at all, and the promised MSDN API documentation appears non-existent. Anybody know how to use this class?

    Read the article

  • Integrating Google Analytics into a GWT application

    - by Domchi
    This should be totally simple, but I can't get it working no matter what I try. I'm trying to use Google Analytics with a GWT application. From what I understand, there are two ways to do it.

    The first is synchronous: insert the tracking code at the end of the <head> section of the HTML page and then call this method:

        public static native void recordAnalyticsHit(String pageName) /*-{
            pageTracker._trackPageview(pageName);
        }-*/;

    The second is asynchronous: insert the tracking code just after the <body> tag and then call this method:

        public static native void recordAnalyticsHit(String pageName) /*-{
            _gaq.push(['_trackPageview(' + pageName + ')']);
        }-*/;

    When running each of those methods, however, I get these exceptions in hosted mode:

        [ERROR] [myproject] Uncaught exception escaped com.google.gwt.core.client.JavaScriptException: (ReferenceError): pageTracker is not defined
        [ERROR] [myproject] Uncaught exception escaped com.google.gwt.core.client.JavaScriptException: (ReferenceError): _gaq is not defined

    When observing the site in Firebug, I see that ga.js gets loaded, but that's about it. Did anyone get Analytics working with GWT? Also, does _gaq accept a page name as a _trackPageview parameter, since all the examples I've seen use this call?

        _gaq.push(['_trackPageview()']);

    (Of course, that also doesn't work for me.)
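    A possible culprit I still need to verify: GWT runs JSNI code inside a hidden iframe, so page-level globals such as pageTracker and _gaq are not visible as bare names; they normally have to be reached through the $wnd reference that GWT provides for the host page's window. The async API also expects the page name as a separate array element rather than spliced into the command string. A sketch of the asynchronous variant under those assumptions (untested):

        public static native void recordAnalyticsHit(String pageName) /*-{
            // $wnd is GWT's handle on the host page's window, where ga.js
            // defined _gaq; a bare _gaq is looked up in the JSNI sandbox instead.
            $wnd._gaq.push(['_trackPageview', pageName]);
        }-*/;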

    Read the article

  • kanban scrumish tool(s) to get started

    - by Davide
    After investigating Scrum and Kanban a little, I finally read this answer and decided to start using Kanban, picking something from Scrum (note that I'm working mostly by myself, and I have read this question and its answers). Now, my question is: which tool would be best to get started?

    • whiteboard and Post-its
    • agilezen.com
    • JIRA with GreenHopper
    • a spreadsheet (possibly on Google Docs)
    • brightgreenprojects.com
    • Agilo
    • Target Process
    • something else (please specify)

    Notes about each:

    • I would lean towards the whiteboard, but there are several drawbacks (e.g. it cannot make automatic charts, time measurements, or metrics, and sometimes I work from home, where I need it most, and it's not convenient to carry :-)
    • I don't want to remember another username/password (I promised myself to sign up only to OpenID-enabled services)
    • My employer has JIRA but my group doesn't use it. I might ask for an account (it shouldn't require another password) and maybe later involve the rest of the group, but I don't know if they are using GreenHopper and if installing it is a big deal.
    • I generally hate spreadsheets
    • maybe overkill?
    • I'd be happy to have a localhost instance, but it could be problematic to give access to the whole group (per network/firewalls); not a deal-breaker but surely a concern

    What I'd like to get from this:

    • being more productive
    • tracking how much time I spend on any given task, possibly discussing the issue with my supervisor
    • tracking what "blocks" me most often
    • immediately seeing where I am compared to my schedule
    • managing my long todo list in a better way (e.g. answering the "what should I do next?" question faster)

    Do you have any suggestions?

    Note on the scrumish tag: read Henrik Kniberg's PDF. He first introduced the definition of scrumish on page 9.

    Read the article

  • Complete list of tools and technologies that make up a solid ASP.NET MVC 2 development environment for beginners

    - by Dr Dork
    This question is related to another wiki I found on SO, but I'd like to develop a more comprehensive example of an automated ASP.NET MVC 2 development environment that beginners can use to develop and deploy a wide range of small-scale websites. As far as characteristics of the dev environment go, I'd like to focus on beginner-friendly over powerful, since the other wiki focuses more on advanced, powerful setups. This information is targeted at beginners (who already know C# and understand web dev concepts) who have selected:

    • ASP.NET MVC 2 as their dev framework
    • Visual Studio 2010 Pro (or 2008 Pro SP1) as their IDE
    • Windows 7 as their OS

    and who are looking for a quick, easy-to-set-up environment that covers managing, building, testing, tracking, and deploying their website with as much automation as possible: a system that can be used for becoming familiar with the whole process, as well as a launching point for exploring other, more custom and powerful systems. Since we've already selected the compiler, framework, and OS, I'd like to develop ideas for:

    • Code editor (unless you feel VS will suffice for all areas of code)
    • Database and related tools
    • Unit testing (VS?)
    • Continuous integration build system (VS?)
    • Project planning
    • Issue tracking
    • Deployment (VS?)
    • Source management (VS?)
    • ASP, C#, VS, and related blogs that beginners can follow
    • Any other categories I'm probably missing

    Since we're already using Visual Studio, I'd like to focus on the out-of-the-box solutions and features built into Visual Studio, unless you feel there are better solutions that work well with VS and are easier to use than the features built directly into VS. Thanks so much in advance for your wisdom!

    Read the article

  • Multiple meshes with one geometry and different textures. Error

    - by user1821834
    I have a loop where I create multiple meshes that share one geometry, because each mesh has its own texture:

        ....
        var geoCube = new THREE.CubeGeometry(voxelSize, voxelSize, voxelSize);
        var geometry = new THREE.Geometry();
        for (var i = 0; i < voxels.length; i++) {
          var voxel = voxels[i];
          var object;
          color = voxel.color;
          // Returns the texture with a color and a text for each face of the geometry
          texture = almacen.textPlaneTexture(voxel.texto, color, voxelSize);
          material = new THREE.MeshBasicMaterial({ map: texture });
          object = new THREE.Mesh(geoCube, material);
          THREE.GeometryUtils.merge(geometry, object);
        }
        // Add the merged geometry to the scene
        mesh = new THREE.Mesh(geometry, new THREE.MeshFaceMaterial());
        mesh.geometry.computeFaceNormals();
        mesh.geometry.computeVertexNormals();
        mesh.geometry.computeTangents();
        scene.add(mesh);
        ....

    But now I get this error from the Three.js code:

        Uncaught TypeError: Cannot read property 'map' of undefined

    in the function:

        function bufferGuessUVType ( material ) {
          ....
        }

    Update: I have now removed the merge solution and I can use a unique geometry for all the voxels, although I think that if I merged the meshes the app would have better performance...

    Read the article

  • Using a dynamic <script> (a <script> appended to the DOM by JavaScript) to load and initialize Clicky

    - by Bungle
    I'm creating an HTML/JavaScript widget to be used on third-party sites. This widget is generated by a <script> that our customers will insert on their page. The <script> creates an <iframe> in the customer's domain, and then creates and inserts all of that <iframe>'s content using JavaScript. It's important that this <iframe> contain Clicky's tracking code to monitor clicks on outbound links. Unfortunately, I'm not having any luck getting Clicky to work when I append the requisite <script> elements to the <iframe> using JavaScript. I first tried simply appending the Clicky tracking code to the <iframe> after appending some test outbound links, hoping that Clicky could attach to those automatically as it does on a static page. That didn't seem to work, so my next inclination was to use the "advanced_disable" custom option and use clicky.log() on the links I want to track. Here's a link to a test page that's along those lines: http://onespot.wsj.com/static/clicky_iframe_test.html When clicking a link on that test page, the action is not logged in Clicky, and a JavaScript error appears: clicky is not defined This ("clicky") appears to be defined in http://static.getclicky.com/js, which I confirmed through the Firebug console is indeed loading before I click a test outbound link. Has anyone successfully loaded Clicky in this way? If so, could you provide some sample code, a link to a working implementation, or some feedback on what's wrong with my code? I would also be interested to know if this is even possible. Thanks very much for any help or advice!
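    A workaround I'm considering trying next (an untested sketch; the polling helper is my own invention, only the clicky global comes from Clicky's script): defer any clicky.log() binding until the tracker global actually exists.

        // Poll until Clicky's script has defined its global, then run the callback.
        function whenClickyReady(callback) {
          if (typeof clicky !== 'undefined') {
            callback();
          } else {
            setTimeout(function () { whenClickyReady(callback); }, 100);
          }
        }

        whenClickyReady(function () {
          // It should now be safe to attach handlers that call clicky.log(...).
        });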

    Read the article

  • loading js files and other dependent js files asynchronously

    - by taber
    I'm looking for a clean way to asynchronously load the following types of JavaScript files: a "core" JS file (hmm, let's just call it, oh, I don't know, "jQuery"! haha), x number of JS files that are dependent on the "core" JS file being loaded, and y number of other unrelated JS files. I have a couple of ideas of how to go about it, but I'm not sure what the best way is. I'd like to avoid loading scripts in the document body. So for example, I want the following four JavaScript files to load asynchronously, appropriately named:

        /js/my-contact-page-js-functions.js  // unrelated/independent script
        /js/jquery-1.3.2.min.js              // the "core" script
        /js/jquery.color.min.js              // dependent on jQuery being loaded
        http://thirdparty.com/js/third-party-tracking-script.js  // another unrelated/independent script

    But this won't work, because it's not guaranteed that jQuery is loaded before the color plugin:

        (function() {
          var a = [
              '/js/my-contact-page-functions.js',
              '/js/jquery-1.4.2.min.js',
              '/js/jquery.color.js',
              'http://cdn.thirdparty.com/third-party-tracking-script.js'
            ],
            d = document,
            h = d.getElementsByTagName('head')[0],
            s, i, l = a.length;
          for (i = 0; i < l; i++) {
            s = d.createElement('script');
            s.type = 'text/javascript';
            s.async = true;
            s.src = a[i];
            h.appendChild(s);
          }
        })();

    Is it pretty much impossible to load jQuery and the color plugin asynchronously, since the color plugin requires that jQuery is loaded first? The first method I was considering is to just combine the color plugin script with the jQuery source into one file. Then another idea I had was loading the color plugin like so:

        $(window).ready(function() {
          $.getScript("/js/jquery.color.js");
        });

    Anyone have any thoughts on how you'd go about this? Thanks!
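    A third direction I'm considering (untested sketch): start the independent scripts immediately, but chain the color plugin onto jQuery's load event, using onload plus onreadystatechange for older IE (both standard script-element events):

        function loadScript(src, onLoaded) {
          var s = document.createElement('script');
          s.type = 'text/javascript';
          s.async = true;
          s.src = src;
          if (onLoaded) {
            var done = false;
            var fire = function () { if (!done) { done = true; onLoaded(); } };
            s.onload = fire; // standards-based browsers
            s.onreadystatechange = function () { // older IE
              if (this.readyState === 'loaded' || this.readyState === 'complete') {
                fire();
              }
            };
          }
          document.getElementsByTagName('head')[0].appendChild(s);
        }

        // Independent scripts start right away.
        loadScript('/js/my-contact-page-js-functions.js');
        loadScript('http://thirdparty.com/js/third-party-tracking-script.js');
        // The color plugin is only requested once jQuery has finished loading.
        loadScript('/js/jquery-1.3.2.min.js', function () {
          loadScript('/js/jquery.color.min.js');
        });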

    Read the article

  • How does Lucene index documents?

    - by Mehdi Amrollahi
    I have read some documentation about Lucene, including the material at this link (http://lucene.sourceforge.net/talks/pisa), but I don't really understand how Lucene indexes documents, or which algorithms it uses for indexing. The link above says Lucene uses this algorithm for indexing:

        incremental algorithm:
          maintain a stack of segment indices
          create index for each incoming document
          push new indexes onto the stack
          let b=10 be the merge factor; M=8

          for (size = 1; size < M; size *= b) {
            if (there are b indexes with size docs on top of the stack) {
              pop them off the stack;
              merge them into a single index;
              push the merged index onto the stack;
            } else {
              break;
            }
          }

    How does this algorithm provide optimized indexing? Does Lucene use a B-tree, or any similar algorithm, for indexing, or does it have a particular algorithm of its own? Thank you for reading my post.
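    To check my understanding of that pseudocode, I wrote a small simulation (a JavaScript sketch, not Lucene code, and it ignores the M bound for clarity): each incoming document pushes a one-document segment, and whenever b same-sized segments sit on top of the stack they are merged into one, which may cascade.

        // Simulate the stack-based logarithmic merge with merge factor b.
        function addDocument(stack, b) {
          stack.push(1); // every new document starts as a one-document segment
          var size = 1;
          while (stack.length >= b) {
            var top = stack.slice(stack.length - b);
            if (!top.every(function (s) { return s === size; })) break;
            stack.length -= b;    // pop the b equal-sized segments...
            stack.push(size * b); // ...and push one merged segment
            size *= b;            // the merged segment may itself cascade
          }
        }

        var stack = [], b = 10;
        for (var i = 0; i < 12345; i++) addDocument(stack, b);
        console.log(stack);
        // -> [10000, 1000, 1000, 100, 100, 100, 10, 10, 10, 10, 1, 1, 1, 1, 1]
        // The segment sizes mirror the digits of 12345: each document takes
        // part in at most log_b(N) merges, which keeps indexing incremental.

    If I read it correctly, this stack algorithm only governs how segments get merged; how each segment stores its terms (an inverted index with sorted term dictionaries, rather than a B-tree) is a separate question.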

    Read the article

  • JPA association table is not deletable

    - by Marcel
    Hi, I have a problem with JPA (EclipseLink): I am not able to delete an association-table object. This is the situation:

        Product 1:n ProductResource
        Resource 1:n ProductResource

    I first set the product and resource attributes of ProductResource. If I then try to delete the ProductResource object, nothing happens (no SQL is generated, no exception). If I comment out both OneToMany annotations, I can delete the object. I can also delete the object when the product and resource attributes are not set. If I comment out only the annotation above the resource attribute, the ProductResource object gets deleted upon the deletion of the product object (cascade=CascadeType.ALL). I hope someone can give me a hint. Thank you.

    ProductResource:

        public class ProductResource implements Serializable {
            @ManyToOne(fetch=FetchType.EAGER, cascade=CascadeType.MERGE)
            private Product product;

            @ManyToOne(fetch=FetchType.EAGER, cascade=CascadeType.MERGE)
            private Resource resource;

    Product:

        public class Product implements Serializable {
            @OneToMany(mappedBy="product", fetch=FetchType.EAGER, cascade=CascadeType.ALL)
            private List<ProductResource> productResources = new ArrayList<ProductResource>();

    Resource:

        public class Resource implements Serializable {
            @OneToMany(mappedBy="resource", fetch=FetchType.EAGER, cascade=CascadeType.ALL)
            private List<ProductResource> productResources = new ArrayList<ProductResource>();

    Greetings, Marcel

    Read the article

  • Source Control - Xcode - Visual Studio 2005/2008/2010

    - by Mick Walker
    My apologies if this has been asked before; I wasn't quite sure if this question should be asked on a programming forum, as it relates more to the programming environment than to a particular technology, so please accept my (double) apologies if I am posting this in the wrong place. My logic in this case was: if it affects the code I write, then this is the place for it.

    At home, I do a lot of my development on a Mac Pro. I do development for the Mac, iPhone and Windows on this machine (Xcode and Visual Studio; multiple versions installed in Boot Camp, but generally I run it via Parallels). When visiting a client, I have a similar setup, but on my MacBook Pro.

    What I want is a source control solution to install on the Mac Pro that will support both Xcode and multiple versions of Visual Studio, so that when I visit a client, I can simply grab the latest copy from source control via the MacBook Pro. While visiting the client, he or she may suggest changes, and minor ones I would tend to make on site, so I need the ability to merge any modified code back into the trunk of the project/solution when I return home.

    At the moment, I am using no source control at all, and rely on simply copying folders and overwriting them when I return from a client; that's my 'merge'!!! I was wondering if anyone had any ideas for a source control provider I could use that supports both Windows and Mac development environments and is cheap (free would be better).

    Read the article

  • SQL Server 2008 JOIN hints

    - by Nai
    Hi all, recently I was trying to optimise this query:

        UPDATE Analytics
        SET UserID = x.UserID
        FROM Analytics z
        INNER JOIN UserDetail x ON x.UserGUID = z.UserGUID

    The estimated execution plan showed 57% on the Table Update and 40% on a Hash Match (Aggregate). I did some snooping around and came across the topic of JOIN hints, so I added a LOOP hint to my inner join and WA-ZHAM! The new execution plan shows 38% on the Table Update and 58% on an Index Seek. So I was about to start applying LOOP hints to all my queries until prudence got the better of me. After some googling, I realised that JOIN hints are not very well covered in BOL. Therefore:

    1. Can someone please tell me why applying LOOP hints to all my queries is a bad idea? I read somewhere that a LOOP JOIN is the default JOIN method for the query optimiser, but I couldn't verify the validity of that statement.
    2. When are JOIN hints used? When the sh*t hits the fan and Ghostbusters ain't in town?
    3. What's the difference between the LOOP, HASH and MERGE hints? BOL states that MERGE seems to be the slowest, but what is the application of each hint?

    Thanks for your time and help, people! I'm running SQL Server 2008, BTW, and the statistics mentioned above are from ESTIMATED execution plans.

    Read the article

  • Hadoop streaming with Python and the Python subprocess module

    - by Ganesh
    I have set up a basic Hadoop master-slave cluster and am able to run MapReduce programs (including Python) on the cluster. Now I am trying to run Python code that accesses a C binary, so I am using the subprocess module. I am able to use Hadoop streaming for normal Python code, but when I include the subprocess module to access a binary, the job fails. As you can see in the logs below, the hello executable is recognised for the packaging, but the code still won't run.

        packageJobJar: [/tmp/hello/hello, /app/hadoop/tmp/hadoop-unjar5030080067721998885/] [] /tmp/streamjob7446402517274720868.jar tmpDir=null
        JarBuilder.addNamedStream hello
        ...
        12/03/07 22:31:32 INFO mapred.FileInputFormat: Total input paths to process : 1
        12/03/07 22:31:32 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
        12/03/07 22:31:32 INFO streaming.StreamJob: Running job: job_201203062329_0057
        12/03/07 22:31:32 INFO streaming.StreamJob: To kill this job, run:
        12/03/07 22:31:32 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
        12/03/07 22:31:32 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
        12/03/07 22:31:33 INFO streaming.StreamJob: map 0% reduce 0%
        12/03/07 22:32:05 INFO streaming.StreamJob: map 100% reduce 100%
        12/03/07 22:32:05 INFO streaming.StreamJob: To kill this job, run:
        12/03/07 22:32:05 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
        12/03/07 22:32:05 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
        12/03/07 22:32:05 ERROR streaming.StreamJob: Job not Successful!
        12/03/07 22:32:05 INFO streaming.StreamJob: killJob...
        Streaming Job Failed!

    The command I am trying is:

        hadoop jar contrib/streaming/hadoop-*streaming*.jar -mapper /home/hduser/MARS.py -reducer /home/hduser/MARS_red.py -input /user/hduser/mars_inputt -output /user/hduser/mars-output -file /tmp/hello/hello -verbose

    where hello is the C executable. It is a simple hello-world program which I am using to check the basic functioning. My Python code is:

        #!/usr/bin/env python
        import subprocess
        subprocess.call(["./hello"])

    Any help with getting the executable to run with Python in Hadoop streaming, or with debugging this, will help me move forward. Thanks, Ganesh

    Read the article

  • Doubts About Core Data NSManagedObject Deep Copy

    - by Jigzat
    Hello everyone, I have a situation where I must copy one NSManagedObject from the main context into an editing context. This sounds unnecessary to most people, as I have seen in similar situations described on Stack Overflow, but it looks like I need it. In my app there are many views in a tab bar, and every view handles information that is related to the other views. I think I need multiple MOCs, since the user may jump from tab to tab and leave unsaved changes in some tab; if data is then saved in some other tab/view, the changes in the rest of the views are saved without the user's consent, and in the worst-case scenario the app crashes.

    For adding new objects I got away with using an adding MOC and then merging changes in both MOCs, but for editing it is not that easy. I saw a similar situation here on Stack Overflow, but the app crashes, since my data model doesn't seem to use NSMutableSet for the relationships (I don't think I have a many-to-many relationship, just one-to-many). I think the code can be modified so I can retrieve the relationships as if they were attributes:

        for (NSString *attr in relationships) {
            [cloned setValue:[source valueForKey:attr] forKey:attr];
        }

    but I don't know how to merge the changes between the cloned and original objects. I think I could just delete the object from the main context, then merge both contexts and save the changes in the main context, but I don't know if that is the right way to do it. I'm also concerned about database integrity, since I'm not sure that the inverse relationships will keep the same reference to the cloned object as if it were the original one. Can someone please enlighten me about this?

    Read the article

  • Subversion versus Vault

    - by WebDude
    I'm currently reviewing the benefits of moving from SVN to SourceGear Vault. Does anyone have advice, or a link to a detailed comparison of the two? Bear in mind that I would have to move my current source control history across, which works strongly in SVN's favor.

    Here is some information I have found so far from my own investigations. I have been running timing tests on both, and Vault seems to perform most operations much faster. The time tests used the same server as the repository, the same client workstation, and the same project.

    Time comparisons

    SVN:
    • Add/Commit: 12:30
    • Get Latest Revision: 5:35
    • Tagging/Labelling: 0:01
    • Branching: N/A (I don't think true branching exists in SVN)

    Vault:
    • Add/Commit: 4:45
    • Get Latest Revision: 0:51
    • Tagging/Labelling: 0:30
    • Branching: 3:23

    I also found an online source comparing some other points; this is the kind of information I'm looking for.

    Usage comparisons
    • Subversion is edit/merge/commit only. Vault allows you to do either edit/merge/commit or checkout/edit/checkin.
    • Vault looks and acts just like VSS, which makes the learning curve effectively zero for VSS users.
    • Vault has a VS plugin, but it only works if you're going to run in checkout mode.
    • Subversion has clients for pretty much every OS you can imagine; Vault has a GUI client for Windows and a command-line client for Mono.
    • Both will support remote work, since both use HTTP as their transport (Subversion uses extended DAV, Vault uses SOAP).
    • Subversion installation, especially with Apache, is more complex.
    • Subversion has a lot of third-party support. Vault has just a few things.

    My question: does anyone have advice, or a link to a detailed comparison of the two?

    Read the article

  • How can one set up a version control system on a local network, without a server?

    - by Andrew
    Edit: OK, so I learned that I guess I need a distributed source control system. However, are there any UI-based ones, and do they allow you to merge with other users on the network?

    This is kind of a two-part question, so here it goes. I want to start developing a web application at home (with multiple developers). However, I don't have a dedicated server, nor do I want to pay for one. So first, I don't know which version control system to use for this case, as at work we mostly have TFS set up, so I am not too familiar with what's out there. What are the best free CVS/SVN tools out there? Second, is it possible to somehow set up the CVS/SVN system where there is no dedicated server and both clients store up to one week of the source code from the last check-in? Also, it would be helpful if it could integrate with Visual Studio, though again this isn't that important at all.

    The problem: there are five users, one of which acts as a server.
    • Server connected: all OK.
    • Server disconnected: no one can share.

    What I am looking for:
    • No server: users still have versioning based on the version id of the last check-in.
    • Users must check all versions on the network to make sure they aren't outdated based on their last version id. If not, check in; otherwise merge/get latest. If they check in, set the current version id +1.

    Read the article

  • Using git pull to track a remote branch without merging

    - by J Barlow
    I am using git to track content which is changed by some people and shared "read-only" with others. The "readers" may from time to time need to make a change, but mostly they will not. I want to allow the git "writers" to rebase pushed branches** if need be, and to ensure that the "readers" never accidentally get a merge. That's normally easy enough:

        git pull origin +master

    There's one case that causes problems: if a reader makes a local change, the command above will merge. I want pull to be fully automatic if the reader has not made local changes, while if they have made local changes, it should stop and ask for input. I want to track any upstream changes while being careful about merging downstream changes. In a way, I don't really want to pull; I want to track the master branch exactly.

    ** (I know this is not a best practice, but it seems necessary in our case: we have one main branch that contains most of the work and some topic branches for specific customers with minor changes that need to be isolated. It seems easiest to frequently rebase to keep the topics up to date.)
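    One approach I am experimenting with instead (untested beyond simple cases): replace pull with an explicit fetch, then only move the local branch when it can fast-forward.

        git fetch origin
        # Fast-forward only: succeeds silently when the reader has no local
        # commits, and stops with an error when their history has diverged.
        git merge --ff-only origin/master

    Because --ff-only refuses to create a merge commit, a reader with local changes is forced to stop and decide what to do, which is the behaviour I want. The wrinkle is that after a writer rebases, origin/master itself is rewritten, so even a clean reader cannot fast-forward; in that case a wrapper script could confirm that the working tree is clean (git status --porcelain prints nothing) and that git rev-list origin/master..master is empty, and then mirror the remote exactly with git reset --hard origin/master.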

    Read the article
